00:00:00.000 Started by upstream project "autotest-per-patch" build number 132377 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.127 The recommended git tool is: git 00:00:00.127 using credential 00000000-0000-0000-0000-000000000002 00:00:00.129 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.216 Using shallow fetch with depth 1 00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.216 > git --version # timeout=10 00:00:00.255 > git --version # 'git version 2.39.2' 00:00:00.255 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.785 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.796 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.808 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.808 > git config core.sparsecheckout # timeout=10 00:00:06.819 > git read-tree -mu HEAD # timeout=10 00:00:06.837 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.858 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.858 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.978 [Pipeline] Start of Pipeline 00:00:06.993 [Pipeline] library 00:00:06.995 Loading library shm_lib@master 00:00:06.995 Library shm_lib@master is cached. Copying from home. 00:00:07.011 [Pipeline] node 00:00:07.019 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.021 [Pipeline] { 00:00:07.032 [Pipeline] catchError 00:00:07.034 [Pipeline] { 00:00:07.044 [Pipeline] wrap 00:00:07.052 [Pipeline] { 00:00:07.060 [Pipeline] stage 00:00:07.062 [Pipeline] { (Prologue) 00:00:07.254 [Pipeline] sh 00:00:07.537 + logger -p user.info -t JENKINS-CI 00:00:07.559 [Pipeline] echo 00:00:07.561 Node: CYP9 00:00:07.568 [Pipeline] sh 00:00:07.872 [Pipeline] setCustomBuildProperty 00:00:07.884 [Pipeline] echo 00:00:07.886 Cleanup processes 00:00:07.891 [Pipeline] sh 00:00:08.178 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.178 2403840 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.196 [Pipeline] sh 00:00:08.488 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.488 ++ grep -v 'sudo pgrep' 00:00:08.488 ++ awk '{print $1}' 00:00:08.488 + sudo kill -9 00:00:08.488 + true 00:00:08.504 [Pipeline] cleanWs 00:00:08.515 [WS-CLEANUP] Deleting project workspace... 00:00:08.515 [WS-CLEANUP] Deferred wipeout is used... 
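The cleanup stage above hunts for SPDK processes left over from a previous run on this node: pgrep lists anything still referencing the workspace, the pgrep invocation itself is filtered out, and any surviving PIDs are force-killed. The trailing "+ true" is what keeps the stage green when nothing matched, since kill -9 with an empty PID list exits nonzero. A self-contained sketch of that pattern, using the workspace path from this log:

    #!/usr/bin/env bash
    # Kill any leftover processes that still reference the old workspace.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill fails when $pids is empty; '|| true' keeps the cleanup stage green.
    sudo kill -9 $pids || true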
00:00:08.521 [WS-CLEANUP] done 00:00:08.526 [Pipeline] setCustomBuildProperty 00:00:08.542 [Pipeline] sh 00:00:08.830 + sudo git config --global --replace-all safe.directory '*' 00:00:08.929 [Pipeline] httpRequest 00:00:09.637 [Pipeline] echo 00:00:09.639 Sorcerer 10.211.164.20 is alive 00:00:09.649 [Pipeline] retry 00:00:09.651 [Pipeline] { 00:00:09.666 [Pipeline] httpRequest 00:00:09.670 HttpMethod: GET 00:00:09.670 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.671 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.680 Response Code: HTTP/1.1 200 OK 00:00:09.680 Success: Status code 200 is in the accepted range: 200,404 00:00:09.681 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.663 [Pipeline] } 00:00:11.680 [Pipeline] // retry 00:00:11.688 [Pipeline] sh 00:00:11.977 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.995 [Pipeline] httpRequest 00:00:12.568 [Pipeline] echo 00:00:12.570 Sorcerer 10.211.164.20 is alive 00:00:12.579 [Pipeline] retry 00:00:12.581 [Pipeline] { 00:00:12.595 [Pipeline] httpRequest 00:00:12.600 HttpMethod: GET 00:00:12.600 URL: http://10.211.164.20/packages/spdk_4d3e9954dd8b6239740cc12ccf8e341f782d03a8.tar.gz 00:00:12.601 Sending request to url: http://10.211.164.20/packages/spdk_4d3e9954dd8b6239740cc12ccf8e341f782d03a8.tar.gz 00:00:12.617 Response Code: HTTP/1.1 200 OK 00:00:12.617 Success: Status code 200 is in the accepted range: 200,404 00:00:12.618 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4d3e9954dd8b6239740cc12ccf8e341f782d03a8.tar.gz 00:01:15.375 [Pipeline] } 00:01:15.394 [Pipeline] // retry 00:01:15.403 [Pipeline] sh 00:01:15.693 + tar --no-same-owner -xf spdk_4d3e9954dd8b6239740cc12ccf8e341f782d03a8.tar.gz 00:01:19.006 [Pipeline] sh 00:01:19.295 + git -C spdk log --oneline -n5 00:01:19.295 4d3e9954d test/nvme/xnvme: Add different io patterns 00:01:19.295 d5455995c test/nvme/xnvme: Add simple RPC validation test 00:01:19.295 69d73d129 test/nvme/xnvme: Add simple test with SPDK's fio plugin 00:01:19.295 637d0d0b9 scripts/rpc: Fix conserve_cpu arg in bdev_xnvme_create() 00:01:19.295 1c2e5edaf bdev/xnvme: Make sure conserve_cpu opt is preserved in the struct 00:01:19.307 [Pipeline] } 00:01:19.323 [Pipeline] // stage 00:01:19.333 [Pipeline] stage 00:01:19.335 [Pipeline] { (Prepare) 00:01:19.352 [Pipeline] writeFile 00:01:19.367 [Pipeline] sh 00:01:19.654 + logger -p user.info -t JENKINS-CI 00:01:19.667 [Pipeline] sh 00:01:19.954 + logger -p user.info -t JENKINS-CI 00:01:19.966 [Pipeline] sh 00:01:20.256 + cat autorun-spdk.conf 00:01:20.256 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.256 SPDK_TEST_NVMF=1 00:01:20.256 SPDK_TEST_NVME_CLI=1 00:01:20.256 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.256 SPDK_TEST_NVMF_NICS=e810 00:01:20.256 SPDK_TEST_VFIOUSER=1 00:01:20.256 SPDK_RUN_UBSAN=1 00:01:20.256 NET_TYPE=phy 00:01:20.264 RUN_NIGHTLY=0 00:01:20.269 [Pipeline] readFile 00:01:20.295 [Pipeline] withEnv 00:01:20.297 [Pipeline] { 00:01:20.309 [Pipeline] sh 00:01:20.597 + set -ex 00:01:20.597 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:20.597 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:20.597 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.597 ++ SPDK_TEST_NVMF=1 00:01:20.597 ++ SPDK_TEST_NVME_CLI=1 00:01:20.597 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.597 
++ SPDK_TEST_NVMF_NICS=e810 00:01:20.597 ++ SPDK_TEST_VFIOUSER=1 00:01:20.597 ++ SPDK_RUN_UBSAN=1 00:01:20.597 ++ NET_TYPE=phy 00:01:20.597 ++ RUN_NIGHTLY=0 00:01:20.597 + case $SPDK_TEST_NVMF_NICS in 00:01:20.597 + DRIVERS=ice 00:01:20.597 + [[ tcp == \r\d\m\a ]] 00:01:20.597 + [[ -n ice ]] 00:01:20.597 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:20.597 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:20.597 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:20.597 rmmod: ERROR: Module irdma is not currently loaded 00:01:20.597 rmmod: ERROR: Module i40iw is not currently loaded 00:01:20.597 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:20.597 + true 00:01:20.597 + for D in $DRIVERS 00:01:20.597 + sudo modprobe ice 00:01:20.597 + exit 0 00:01:20.607 [Pipeline] } 00:01:20.622 [Pipeline] // withEnv 00:01:20.626 [Pipeline] } 00:01:20.640 [Pipeline] // stage 00:01:20.649 [Pipeline] catchError 00:01:20.651 [Pipeline] { 00:01:20.665 [Pipeline] timeout 00:01:20.666 Timeout set to expire in 1 hr 0 min 00:01:20.667 [Pipeline] { 00:01:20.681 [Pipeline] stage 00:01:20.683 [Pipeline] { (Tests) 00:01:20.696 [Pipeline] sh 00:01:20.984 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.984 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.984 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.984 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.984 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.984 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.984 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.984 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.984 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.984 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.984 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.984 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.984 + source /etc/os-release 00:01:20.984 ++ NAME='Fedora Linux' 00:01:20.984 ++ VERSION='39 (Cloud Edition)' 00:01:20.984 ++ ID=fedora 00:01:20.984 ++ VERSION_ID=39 00:01:20.984 ++ VERSION_CODENAME= 00:01:20.984 ++ PLATFORM_ID=platform:f39 00:01:20.984 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.984 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.984 ++ LOGO=fedora-logo-icon 00:01:20.984 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.984 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.984 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.984 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.984 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.984 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.984 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.984 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.984 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.984 ++ SUPPORT_END=2024-11-12 00:01:20.984 ++ VARIANT='Cloud Edition' 00:01:20.984 ++ VARIANT_ID=cloud 00:01:20.984 + uname -a 00:01:20.984 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.984 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:24.285 Hugepages 00:01:24.285 node hugesize free / total 00:01:24.285 node0 1048576kB 0 / 0 00:01:24.285 node0 2048kB 0 / 0 00:01:24.285 node1 1048576kB 0 / 0 00:01:24.285 node1 2048kB 0 / 0 00:01:24.285 
00:01:24.285 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.285 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:24.285 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:24.285 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:24.285 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:24.285 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:24.285 + rm -f /tmp/spdk-ld-path 00:01:24.285 + source autorun-spdk.conf 00:01:24.285 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.285 ++ SPDK_TEST_NVMF=1 00:01:24.285 ++ SPDK_TEST_NVME_CLI=1 00:01:24.285 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.285 ++ SPDK_TEST_NVMF_NICS=e810 00:01:24.285 ++ SPDK_TEST_VFIOUSER=1 00:01:24.285 ++ SPDK_RUN_UBSAN=1 00:01:24.285 ++ NET_TYPE=phy 00:01:24.285 ++ RUN_NIGHTLY=0 00:01:24.285 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.285 + [[ -n '' ]] 00:01:24.285 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.285 + for M in /var/spdk/build-*-manifest.txt 00:01:24.285 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:24.285 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.285 + for M in /var/spdk/build-*-manifest.txt 00:01:24.285 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.285 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.285 + for M in /var/spdk/build-*-manifest.txt 00:01:24.285 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.285 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.285 ++ uname 00:01:24.285 + [[ Linux == \L\i\n\u\x ]] 00:01:24.285 + sudo dmesg -T 00:01:24.285 + sudo dmesg --clear 00:01:24.285 + dmesg_pid=2404818 00:01:24.285 + [[ Fedora Linux == FreeBSD ]] 00:01:24.285 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.285 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.285 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.285 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:24.285 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:24.285 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.285 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.285 + sudo dmesg -Tw 00:01:24.285 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.285 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.285 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:24.285 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.285 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.285 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.285 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.285 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.285 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.285 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.547 11:02:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.547 11:02:17 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:24.547 11:02:17 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:24.547 11:02:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.547 11:02:17 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.547 11:02:17 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:24.547 11:02:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:24.547 11:02:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.547 11:02:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.547 11:02:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.547 11:02:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.547 11:02:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.547 11:02:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.547 11:02:17 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.547 11:02:17 -- paths/export.sh@5 -- $ export PATH 00:01:24.547 11:02:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.547 11:02:17 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:24.547 11:02:17 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.547 11:02:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732096937.XXXXXX 00:01:24.547 11:02:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732096937.0nL0WQ 00:01:24.547 11:02:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.547 11:02:17 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.547 11:02:17 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:24.547 11:02:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.547 11:02:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.547 11:02:17 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.547 11:02:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.547 11:02:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.547 11:02:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:24.547 11:02:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.547 11:02:17 -- pm/common@17 -- $ local monitor 00:01:24.547 11:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.547 11:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.547 11:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.547 11:02:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.547 11:02:17 -- pm/common@21 -- $ date +%s 00:01:24.547 11:02:17 -- pm/common@21 -- $ date +%s 00:01:24.547 11:02:17 -- pm/common@25 -- $ sleep 1 00:01:24.547 11:02:17 -- pm/common@21 -- $ date +%s 00:01:24.547 11:02:17 -- pm/common@21 -- $ date +%s 00:01:24.547 11:02:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096937 00:01:24.547 11:02:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096937 00:01:24.547 11:02:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096937 00:01:24.547 11:02:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732096937 00:01:24.547 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096937_collect-cpu-load.pm.log 00:01:24.547 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096937_collect-vmstat.pm.log 00:01:24.547 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096937_collect-cpu-temp.pm.log 00:01:24.547 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732096937_collect-bmc-pm.bmc.pm.log 00:01:25.486 11:02:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:25.486 11:02:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.486 11:02:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.486 11:02:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.486 11:02:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.486 Wed Nov 20 10:02:18 AM UTC 2024 00:01:25.486 11:02:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.486 v25.01-pre-206-g4d3e9954d 00:01:25.486 11:02:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.486 11:02:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.486 11:02:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.486 11:02:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.486 11:02:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.486 11:02:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.746 ************************************ 00:01:25.746 START TEST ubsan 00:01:25.746 ************************************ 00:01:25.746 11:02:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:25.746 using ubsan 00:01:25.746 00:01:25.746 real 0m0.001s 00:01:25.746 user 0m0.000s 00:01:25.746 sys 0m0.001s 00:01:25.746 11:02:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.746 11:02:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.746 ************************************ 00:01:25.746 END TEST ubsan 00:01:25.746 ************************************ 00:01:25.746 11:02:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.746 11:02:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.746 11:02:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.746 11:02:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.746 11:02:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.746 11:02:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.746 11:02:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.746 11:02:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.746 
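The "START TEST ubsan" / "END TEST ubsan" banners and the real/user/sys timings above come from SPDK's run_test helper in common/autotest_common.sh, which brackets and times each named test. A simplified sketch of the pattern as it appears in this log, not SPDK's actual implementation:

    # Rough shape of a run_test-style wrapper: banner, timed command, banner.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test ubsan echo 'using ubsan'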
11:02:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:25.746 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:25.746 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:26.316 Using 'verbs' RDMA provider 00:01:42.225 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:54.460 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:55.032 Creating mk/config.mk...done. 00:01:55.032 Creating mk/cc.flags.mk...done. 00:01:55.032 Type 'make' to build. 00:01:55.032 11:02:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:55.032 11:02:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:55.032 11:02:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:55.032 11:02:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.032 ************************************ 00:01:55.032 START TEST make 00:01:55.032 ************************************ 00:01:55.032 11:02:47 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:55.605 make[1]: Nothing to be done for 'all'. 00:01:56.991 The Meson build system 00:01:56.991 Version: 1.5.0 00:01:56.991 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:56.991 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:56.991 Build type: native build 00:01:56.991 Project name: libvfio-user 00:01:56.991 Project version: 0.0.1 00:01:56.991 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:56.991 C linker for the host machine: cc ld.bfd 2.40-14 00:01:56.991 Host machine cpu family: x86_64 00:01:56.991 Host machine cpu: x86_64 00:01:56.991 Run-time dependency threads found: YES 00:01:56.991 Library dl found: YES 00:01:56.991 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:56.991 Run-time dependency json-c found: YES 0.17 00:01:56.991 Run-time dependency cmocka found: YES 1.1.7 00:01:56.991 Program pytest-3 found: NO 00:01:56.991 Program flake8 found: NO 00:01:56.991 Program misspell-fixer found: NO 00:01:56.991 Program restructuredtext-lint found: NO 00:01:56.991 Program valgrind found: YES (/usr/bin/valgrind) 00:01:56.991 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.991 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.991 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.991 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:56.991 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:56.991 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:56.991 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
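At this point autobuild has configured the SPDK tree and kicked off the build: the flags were assembled by get_config_params from autorun-spdk.conf (SPDK_RUN_UBSAN=1 surfaces as --enable-ubsan, SPDK_TEST_VFIOUSER=1 as --with-vfio-user, as both the conf dump and the configure line above show), and "run_test make" then drives make. Because --with-vfio-user is set, the build first configures the bundled libvfio-user with Meson, which is the output surrounding this point. Reproducing the same configuration by hand, with the flags exactly as logged:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j144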
00:01:56.991 Build targets in project: 8 00:01:56.991 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:56.991 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:56.991 00:01:56.991 libvfio-user 0.0.1 00:01:56.991 00:01:56.991 User defined options 00:01:56.991 buildtype : debug 00:01:56.991 default_library: shared 00:01:56.991 libdir : /usr/local/lib 00:01:56.991 00:01:56.991 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.251 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:57.511 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:57.511 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:57.511 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:57.511 [4/37] Compiling C object samples/null.p/null.c.o 00:01:57.511 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:57.511 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:57.511 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:57.511 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:57.511 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:57.511 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:57.511 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:57.511 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:57.511 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:57.511 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:57.511 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:57.511 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:57.511 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:57.511 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:57.511 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:57.511 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:57.511 [21/37] Compiling C object samples/server.p/server.c.o 00:01:57.511 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:57.511 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:57.511 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:57.511 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:57.511 [26/37] Compiling C object samples/client.p/client.c.o 00:01:57.511 [27/37] Linking target samples/client 00:01:57.511 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:57.511 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:57.511 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:57.772 [31/37] Linking target test/unit_tests 00:01:57.772 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:57.772 [33/37] Linking target samples/server 00:01:57.772 [34/37] Linking target samples/null 00:01:57.772 [35/37] Linking target samples/gpio-pci-idio-16 00:01:57.772 [36/37] Linking target samples/lspci 00:01:57.772 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:57.772 INFO: autodetecting backend as ninja 00:01:57.772 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
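With all 37 libvfio-user targets linked, the library is staged into SPDK's own build tree rather than installed system-wide: the install step that follows sets DESTDIR so meson install lands everything under spdk/build/libvfio-user while preserving the configured /usr/local/lib layout beneath it. The pattern, with the paths from this log:

    # Stage the install under the SPDK tree instead of the real root;
    # files end up at $DESTDIR/usr/local/lib/... per the configured libdir.
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

Since nothing changed after the build step, ninja then reports "no work to do."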
00:01:58.033 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:58.294 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:58.294 ninja: no work to do. 00:02:03.671 The Meson build system 00:02:03.671 Version: 1.5.0 00:02:03.671 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:03.672 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:03.672 Build type: native build 00:02:03.672 Program cat found: YES (/usr/bin/cat) 00:02:03.672 Project name: DPDK 00:02:03.672 Project version: 24.03.0 00:02:03.672 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:03.672 C linker for the host machine: cc ld.bfd 2.40-14 00:02:03.672 Host machine cpu family: x86_64 00:02:03.672 Host machine cpu: x86_64 00:02:03.672 Message: ## Building in Developer Mode ## 00:02:03.672 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:03.672 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:03.672 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:03.672 Program python3 found: YES (/usr/bin/python3) 00:02:03.672 Program cat found: YES (/usr/bin/cat) 00:02:03.672 Compiler for C supports arguments -march=native: YES 00:02:03.672 Checking for size of "void *" : 8 00:02:03.672 Checking for size of "void *" : 8 (cached) 00:02:03.672 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:03.672 Library m found: YES 00:02:03.672 Library numa found: YES 00:02:03.672 Has header "numaif.h" : YES 00:02:03.672 Library fdt found: NO 00:02:03.672 Library execinfo found: NO 00:02:03.672 Has header "execinfo.h" : YES 00:02:03.672 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:03.672 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:03.672 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:03.672 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:03.672 Run-time dependency openssl found: YES 3.1.1 00:02:03.672 Run-time dependency libpcap found: YES 1.10.4 00:02:03.672 Has header "pcap.h" with dependency libpcap: YES 00:02:03.672 Compiler for C supports arguments -Wcast-qual: YES 00:02:03.672 Compiler for C supports arguments -Wdeprecated: YES 00:02:03.672 Compiler for C supports arguments -Wformat: YES 00:02:03.672 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:03.672 Compiler for C supports arguments -Wformat-security: NO 00:02:03.672 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.672 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:03.672 Compiler for C supports arguments -Wnested-externs: YES 00:02:03.672 Compiler for C supports arguments -Wold-style-definition: YES 00:02:03.672 Compiler for C supports arguments -Wpointer-arith: YES 00:02:03.672 Compiler for C supports arguments -Wsign-compare: YES 00:02:03.672 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:03.672 Compiler for C supports arguments -Wundef: YES 00:02:03.672 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.672 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:03.672 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:03.672 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.672 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:03.672 Program objdump found: YES (/usr/bin/objdump) 00:02:03.672 Compiler for C supports arguments -mavx512f: YES 00:02:03.672 Checking if "AVX512 checking" compiles: YES 00:02:03.672 Fetching value of define "__SSE4_2__" : 1 00:02:03.672 Fetching value of define "__AES__" : 1 00:02:03.672 Fetching value of define "__AVX__" : 1 00:02:03.672 Fetching value of define "__AVX2__" : 1 00:02:03.672 Fetching value of define "__AVX512BW__" : 1 00:02:03.672 Fetching value of define "__AVX512CD__" : 1 00:02:03.672 Fetching value of define "__AVX512DQ__" : 1 00:02:03.672 Fetching value of define "__AVX512F__" : 1 00:02:03.672 Fetching value of define "__AVX512VL__" : 1 00:02:03.672 Fetching value of define "__PCLMUL__" : 1 00:02:03.672 Fetching value of define "__RDRND__" : 1 00:02:03.672 Fetching value of define "__RDSEED__" : 1 00:02:03.672 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:03.672 Fetching value of define "__znver1__" : (undefined) 00:02:03.672 Fetching value of define "__znver2__" : (undefined) 00:02:03.672 Fetching value of define "__znver3__" : (undefined) 00:02:03.672 Fetching value of define "__znver4__" : (undefined) 00:02:03.672 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:03.672 Message: lib/log: Defining dependency "log" 00:02:03.672 Message: lib/kvargs: Defining dependency "kvargs" 00:02:03.672 Message: lib/telemetry: Defining dependency "telemetry" 00:02:03.672 Checking for function "getentropy" : NO 00:02:03.672 Message: lib/eal: Defining dependency "eal" 00:02:03.672 Message: lib/ring: Defining dependency "ring" 00:02:03.672 Message: lib/rcu: Defining dependency "rcu" 00:02:03.672 Message: lib/mempool: Defining dependency "mempool" 00:02:03.672 Message: lib/mbuf: Defining dependency "mbuf" 00:02:03.672 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:03.672 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:03.672 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:03.672 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:03.672 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:03.672 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:03.672 Compiler for C supports arguments -mpclmul: YES 00:02:03.672 Compiler for C supports arguments -maes: YES 00:02:03.672 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:03.672 Compiler for C supports arguments -mavx512bw: YES 00:02:03.672 Compiler for C supports arguments -mavx512dq: YES 00:02:03.672 Compiler for C supports arguments -mavx512vl: YES 00:02:03.672 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:03.672 Compiler for C supports arguments -mavx2: YES 00:02:03.672 Compiler for C supports arguments -mavx: YES 00:02:03.672 Message: lib/net: Defining dependency "net" 00:02:03.672 Message: lib/meter: Defining dependency "meter" 00:02:03.672 Message: lib/ethdev: Defining dependency "ethdev" 00:02:03.672 Message: lib/pci: Defining dependency "pci" 00:02:03.672 Message: lib/cmdline: Defining dependency "cmdline" 00:02:03.672 Message: lib/hash: Defining dependency "hash" 00:02:03.672 Message: lib/timer: Defining dependency "timer" 00:02:03.672 Message: lib/compressdev: Defining dependency "compressdev" 00:02:03.672 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:03.672 Message: lib/dmadev: Defining dependency "dmadev" 
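The long run of "Compiler for C supports arguments" and "Fetching value of define" lines here is DPDK's Meson setup probing the toolchain: each flag check compiles a tiny test program and records YES/NO, and each define check reads what the compiler predefines, so "__AVX512F__ : 1" means the build host's -march=native (cpu_instruction_set: native, per the options summary below) enables AVX-512. Roughly what one flag probe boils down to, as a standalone sketch:

    # Approximate a meson compiler-flag probe: the flag counts as supported
    # if cc accepts it (promoting warnings to errors) on a trivial source file.
    echo 'int main(void) { return 0; }' > /tmp/probe.c
    if cc -Werror -mavx512f -c /tmp/probe.c -o /tmp/probe.o 2>/dev/null; then
        echo 'Compiler for C supports arguments -mavx512f: YES'
    else
        echo 'Compiler for C supports arguments -mavx512f: NO'
    fi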
00:02:03.672 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:03.672 Message: lib/power: Defining dependency "power" 00:02:03.672 Message: lib/reorder: Defining dependency "reorder" 00:02:03.672 Message: lib/security: Defining dependency "security" 00:02:03.672 Has header "linux/userfaultfd.h" : YES 00:02:03.672 Has header "linux/vduse.h" : YES 00:02:03.672 Message: lib/vhost: Defining dependency "vhost" 00:02:03.672 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:03.672 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:03.672 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:03.672 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:03.672 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:03.672 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:03.672 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:03.672 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:03.672 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:03.672 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:03.672 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:03.672 Configuring doxy-api-html.conf using configuration 00:02:03.672 Configuring doxy-api-man.conf using configuration 00:02:03.672 Program mandb found: YES (/usr/bin/mandb) 00:02:03.672 Program sphinx-build found: NO 00:02:03.672 Configuring rte_build_config.h using configuration 00:02:03.672 Message: 00:02:03.672 ================= 00:02:03.672 Applications Enabled 00:02:03.672 ================= 00:02:03.672 00:02:03.672 apps: 00:02:03.672 00:02:03.672 00:02:03.672 Message: 00:02:03.672 ================= 00:02:03.672 Libraries Enabled 00:02:03.672 ================= 00:02:03.672 00:02:03.672 libs: 00:02:03.672 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:03.672 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:03.672 cryptodev, dmadev, power, reorder, security, vhost, 00:02:03.672 00:02:03.672 Message: 00:02:03.672 =============== 00:02:03.672 Drivers Enabled 00:02:03.672 =============== 00:02:03.672 00:02:03.672 common: 00:02:03.672 00:02:03.672 bus: 00:02:03.672 pci, vdev, 00:02:03.672 mempool: 00:02:03.672 ring, 00:02:03.672 dma: 00:02:03.672 00:02:03.672 net: 00:02:03.672 00:02:03.672 crypto: 00:02:03.672 00:02:03.672 compress: 00:02:03.672 00:02:03.672 vdpa: 00:02:03.672 00:02:03.672 00:02:03.672 Message: 00:02:03.672 ================= 00:02:03.672 Content Skipped 00:02:03.672 ================= 00:02:03.672 00:02:03.672 apps: 00:02:03.672 dumpcap: explicitly disabled via build config 00:02:03.672 graph: explicitly disabled via build config 00:02:03.672 pdump: explicitly disabled via build config 00:02:03.672 proc-info: explicitly disabled via build config 00:02:03.672 test-acl: explicitly disabled via build config 00:02:03.672 test-bbdev: explicitly disabled via build config 00:02:03.672 test-cmdline: explicitly disabled via build config 00:02:03.672 test-compress-perf: explicitly disabled via build config 00:02:03.672 test-crypto-perf: explicitly disabled via build config 00:02:03.672 test-dma-perf: explicitly disabled via build config 00:02:03.672 test-eventdev: explicitly disabled via build config 00:02:03.672 test-fib: explicitly disabled via build config 00:02:03.672 test-flow-perf: explicitly disabled via build config 00:02:03.672 test-gpudev: explicitly disabled 
via build config 00:02:03.672 test-mldev: explicitly disabled via build config 00:02:03.672 test-pipeline: explicitly disabled via build config 00:02:03.672 test-pmd: explicitly disabled via build config 00:02:03.672 test-regex: explicitly disabled via build config 00:02:03.672 test-sad: explicitly disabled via build config 00:02:03.672 test-security-perf: explicitly disabled via build config 00:02:03.672 00:02:03.673 libs: 00:02:03.673 argparse: explicitly disabled via build config 00:02:03.673 metrics: explicitly disabled via build config 00:02:03.673 acl: explicitly disabled via build config 00:02:03.673 bbdev: explicitly disabled via build config 00:02:03.673 bitratestats: explicitly disabled via build config 00:02:03.673 bpf: explicitly disabled via build config 00:02:03.673 cfgfile: explicitly disabled via build config 00:02:03.673 distributor: explicitly disabled via build config 00:02:03.673 efd: explicitly disabled via build config 00:02:03.673 eventdev: explicitly disabled via build config 00:02:03.673 dispatcher: explicitly disabled via build config 00:02:03.673 gpudev: explicitly disabled via build config 00:02:03.673 gro: explicitly disabled via build config 00:02:03.673 gso: explicitly disabled via build config 00:02:03.673 ip_frag: explicitly disabled via build config 00:02:03.673 jobstats: explicitly disabled via build config 00:02:03.673 latencystats: explicitly disabled via build config 00:02:03.673 lpm: explicitly disabled via build config 00:02:03.673 member: explicitly disabled via build config 00:02:03.673 pcapng: explicitly disabled via build config 00:02:03.673 rawdev: explicitly disabled via build config 00:02:03.673 regexdev: explicitly disabled via build config 00:02:03.673 mldev: explicitly disabled via build config 00:02:03.673 rib: explicitly disabled via build config 00:02:03.673 sched: explicitly disabled via build config 00:02:03.673 stack: explicitly disabled via build config 00:02:03.673 ipsec: explicitly disabled via build config 00:02:03.673 pdcp: explicitly disabled via build config 00:02:03.673 fib: explicitly disabled via build config 00:02:03.673 port: explicitly disabled via build config 00:02:03.673 pdump: explicitly disabled via build config 00:02:03.673 table: explicitly disabled via build config 00:02:03.673 pipeline: explicitly disabled via build config 00:02:03.673 graph: explicitly disabled via build config 00:02:03.673 node: explicitly disabled via build config 00:02:03.673 00:02:03.673 drivers: 00:02:03.673 common/cpt: not in enabled drivers build config 00:02:03.673 common/dpaax: not in enabled drivers build config 00:02:03.673 common/iavf: not in enabled drivers build config 00:02:03.673 common/idpf: not in enabled drivers build config 00:02:03.673 common/ionic: not in enabled drivers build config 00:02:03.673 common/mvep: not in enabled drivers build config 00:02:03.673 common/octeontx: not in enabled drivers build config 00:02:03.673 bus/auxiliary: not in enabled drivers build config 00:02:03.673 bus/cdx: not in enabled drivers build config 00:02:03.673 bus/dpaa: not in enabled drivers build config 00:02:03.673 bus/fslmc: not in enabled drivers build config 00:02:03.673 bus/ifpga: not in enabled drivers build config 00:02:03.673 bus/platform: not in enabled drivers build config 00:02:03.673 bus/uacce: not in enabled drivers build config 00:02:03.673 bus/vmbus: not in enabled drivers build config 00:02:03.673 common/cnxk: not in enabled drivers build config 00:02:03.673 common/mlx5: not in enabled drivers build config 00:02:03.673 
common/nfp: not in enabled drivers build config 00:02:03.673 common/nitrox: not in enabled drivers build config 00:02:03.673 common/qat: not in enabled drivers build config 00:02:03.673 common/sfc_efx: not in enabled drivers build config 00:02:03.673 mempool/bucket: not in enabled drivers build config 00:02:03.673 mempool/cnxk: not in enabled drivers build config 00:02:03.673 mempool/dpaa: not in enabled drivers build config 00:02:03.673 mempool/dpaa2: not in enabled drivers build config 00:02:03.673 mempool/octeontx: not in enabled drivers build config 00:02:03.673 mempool/stack: not in enabled drivers build config 00:02:03.673 dma/cnxk: not in enabled drivers build config 00:02:03.673 dma/dpaa: not in enabled drivers build config 00:02:03.673 dma/dpaa2: not in enabled drivers build config 00:02:03.673 dma/hisilicon: not in enabled drivers build config 00:02:03.673 dma/idxd: not in enabled drivers build config 00:02:03.673 dma/ioat: not in enabled drivers build config 00:02:03.673 dma/skeleton: not in enabled drivers build config 00:02:03.673 net/af_packet: not in enabled drivers build config 00:02:03.673 net/af_xdp: not in enabled drivers build config 00:02:03.673 net/ark: not in enabled drivers build config 00:02:03.673 net/atlantic: not in enabled drivers build config 00:02:03.673 net/avp: not in enabled drivers build config 00:02:03.673 net/axgbe: not in enabled drivers build config 00:02:03.673 net/bnx2x: not in enabled drivers build config 00:02:03.673 net/bnxt: not in enabled drivers build config 00:02:03.673 net/bonding: not in enabled drivers build config 00:02:03.673 net/cnxk: not in enabled drivers build config 00:02:03.673 net/cpfl: not in enabled drivers build config 00:02:03.673 net/cxgbe: not in enabled drivers build config 00:02:03.673 net/dpaa: not in enabled drivers build config 00:02:03.673 net/dpaa2: not in enabled drivers build config 00:02:03.673 net/e1000: not in enabled drivers build config 00:02:03.673 net/ena: not in enabled drivers build config 00:02:03.673 net/enetc: not in enabled drivers build config 00:02:03.673 net/enetfec: not in enabled drivers build config 00:02:03.673 net/enic: not in enabled drivers build config 00:02:03.673 net/failsafe: not in enabled drivers build config 00:02:03.673 net/fm10k: not in enabled drivers build config 00:02:03.673 net/gve: not in enabled drivers build config 00:02:03.673 net/hinic: not in enabled drivers build config 00:02:03.673 net/hns3: not in enabled drivers build config 00:02:03.673 net/i40e: not in enabled drivers build config 00:02:03.673 net/iavf: not in enabled drivers build config 00:02:03.673 net/ice: not in enabled drivers build config 00:02:03.673 net/idpf: not in enabled drivers build config 00:02:03.673 net/igc: not in enabled drivers build config 00:02:03.673 net/ionic: not in enabled drivers build config 00:02:03.673 net/ipn3ke: not in enabled drivers build config 00:02:03.673 net/ixgbe: not in enabled drivers build config 00:02:03.673 net/mana: not in enabled drivers build config 00:02:03.673 net/memif: not in enabled drivers build config 00:02:03.673 net/mlx4: not in enabled drivers build config 00:02:03.673 net/mlx5: not in enabled drivers build config 00:02:03.673 net/mvneta: not in enabled drivers build config 00:02:03.673 net/mvpp2: not in enabled drivers build config 00:02:03.673 net/netvsc: not in enabled drivers build config 00:02:03.673 net/nfb: not in enabled drivers build config 00:02:03.673 net/nfp: not in enabled drivers build config 00:02:03.673 net/ngbe: not in enabled drivers build 
config 00:02:03.673 net/null: not in enabled drivers build config 00:02:03.673 net/octeontx: not in enabled drivers build config 00:02:03.673 net/octeon_ep: not in enabled drivers build config 00:02:03.673 net/pcap: not in enabled drivers build config 00:02:03.673 net/pfe: not in enabled drivers build config 00:02:03.673 net/qede: not in enabled drivers build config 00:02:03.673 net/ring: not in enabled drivers build config 00:02:03.673 net/sfc: not in enabled drivers build config 00:02:03.673 net/softnic: not in enabled drivers build config 00:02:03.673 net/tap: not in enabled drivers build config 00:02:03.673 net/thunderx: not in enabled drivers build config 00:02:03.673 net/txgbe: not in enabled drivers build config 00:02:03.673 net/vdev_netvsc: not in enabled drivers build config 00:02:03.673 net/vhost: not in enabled drivers build config 00:02:03.673 net/virtio: not in enabled drivers build config 00:02:03.673 net/vmxnet3: not in enabled drivers build config 00:02:03.673 raw/*: missing internal dependency, "rawdev" 00:02:03.673 crypto/armv8: not in enabled drivers build config 00:02:03.673 crypto/bcmfs: not in enabled drivers build config 00:02:03.673 crypto/caam_jr: not in enabled drivers build config 00:02:03.673 crypto/ccp: not in enabled drivers build config 00:02:03.673 crypto/cnxk: not in enabled drivers build config 00:02:03.673 crypto/dpaa_sec: not in enabled drivers build config 00:02:03.673 crypto/dpaa2_sec: not in enabled drivers build config 00:02:03.673 crypto/ipsec_mb: not in enabled drivers build config 00:02:03.673 crypto/mlx5: not in enabled drivers build config 00:02:03.673 crypto/mvsam: not in enabled drivers build config 00:02:03.673 crypto/nitrox: not in enabled drivers build config 00:02:03.673 crypto/null: not in enabled drivers build config 00:02:03.673 crypto/octeontx: not in enabled drivers build config 00:02:03.673 crypto/openssl: not in enabled drivers build config 00:02:03.673 crypto/scheduler: not in enabled drivers build config 00:02:03.673 crypto/uadk: not in enabled drivers build config 00:02:03.673 crypto/virtio: not in enabled drivers build config 00:02:03.673 compress/isal: not in enabled drivers build config 00:02:03.673 compress/mlx5: not in enabled drivers build config 00:02:03.673 compress/nitrox: not in enabled drivers build config 00:02:03.673 compress/octeontx: not in enabled drivers build config 00:02:03.673 compress/zlib: not in enabled drivers build config 00:02:03.673 regex/*: missing internal dependency, "regexdev" 00:02:03.673 ml/*: missing internal dependency, "mldev" 00:02:03.673 vdpa/ifc: not in enabled drivers build config 00:02:03.673 vdpa/mlx5: not in enabled drivers build config 00:02:03.673 vdpa/nfp: not in enabled drivers build config 00:02:03.673 vdpa/sfc: not in enabled drivers build config 00:02:03.673 event/*: missing internal dependency, "eventdev" 00:02:03.673 baseband/*: missing internal dependency, "bbdev" 00:02:03.673 gpu/*: missing internal dependency, "gpudev" 00:02:03.673 00:02:03.673 00:02:04.244 Build targets in project: 84 00:02:04.244 00:02:04.244 DPDK 24.03.0 00:02:04.244 00:02:04.244 User defined options 00:02:04.244 buildtype : debug 00:02:04.244 default_library : shared 00:02:04.244 libdir : lib 00:02:04.244 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:04.244 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:04.244 c_link_args : 00:02:04.244 cpu_instruction_set: native 00:02:04.244 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:04.244 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:04.244 enable_docs : false 00:02:04.244 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:04.244 enable_kmods : false 00:02:04.244 max_lcores : 128 00:02:04.244 tests : false 00:02:04.244 00:02:04.244 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.516 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:04.516 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:04.516 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:04.516 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:04.778 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:04.778 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:04.778 [6/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:04.778 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:04.778 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:04.778 [9/267] Linking static target lib/librte_kvargs.a 00:02:04.778 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:04.779 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:04.779 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:04.779 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:04.779 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:04.779 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:04.779 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:04.779 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:04.779 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:04.779 [19/267] Linking static target lib/librte_log.a 00:02:04.779 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:04.779 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:04.779 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.779 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:04.779 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:04.779 [25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:04.779 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:04.779 [27/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:04.779 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.779 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.779 [30/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:04.779 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:05.038 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:05.038 [33/267] Linking static target lib/librte_pci.a 00:02:05.038 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:05.038 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:05.038 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.038 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.038 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:05.038 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.038 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.298 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.298 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.298 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.298 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.298 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.298 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.298 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.298 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.298 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.298 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.298 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.298 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.298 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:05.298 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.298 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.298 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.298 [57/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.298 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.298 [59/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.298 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.298 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.298 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.298 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.298 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.298 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.298 [66/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.298 [67/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:05.298 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.298 [69/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.298 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.298 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.298 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.298 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.298 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.298 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.298 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.298 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.298 [78/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:05.298 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.298 [80/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:05.298 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.298 [82/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.298 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.298 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.298 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.298 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.298 [87/267] Linking static target lib/librte_meter.a 00:02:05.298 [88/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.298 [89/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.298 [90/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:05.298 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.298 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:05.298 [93/267] Linking static target lib/librte_telemetry.a 00:02:05.298 [94/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.298 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.298 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.298 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.298 [98/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:05.298 [99/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:05.298 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:05.298 [101/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:05.298 [102/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:05.298 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.298 [104/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:05.298 [105/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:05.298 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.298 [107/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:05.298 [108/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.298 [109/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.298 [110/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.298 [111/267] Linking static target lib/librte_ring.a 00:02:05.298 [112/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.298 [113/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:05.298 [114/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:05.298 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.298 [116/267] Linking static target lib/librte_cmdline.a 00:02:05.298 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:05.298 [118/267] Linking static target lib/librte_timer.a 00:02:05.298 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:05.298 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.298 [121/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:05.298 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:05.298 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:05.298 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:05.298 [125/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:05.298 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.298 [127/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:05.298 [128/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.298 [129/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:05.298 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:05.298 [131/267] Linking static target lib/librte_mempool.a 00:02:05.298 [132/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.298 [133/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:05.298 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:05.298 [135/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:05.298 [136/267] Linking static target lib/librte_dmadev.a 00:02:05.298 [137/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:05.298 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:05.298 [139/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.298 [140/267] Linking static target lib/librte_power.a 00:02:05.298 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.298 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.298 [143/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.298 [144/267] Linking static target lib/librte_net.a 00:02:05.298 [145/267] Linking static target lib/librte_rcu.a 00:02:05.298 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:05.298 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.298 [148/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:05.298 [149/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:05.298 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.298 [151/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:05.560 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.560 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.560 [154/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.560 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.560 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.560 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.560 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.560 [159/267] Linking static target lib/librte_compressdev.a 00:02:05.560 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.560 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.560 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:05.560 [163/267] Linking target lib/librte_log.so.24.1 00:02:05.560 [164/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:05.560 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.560 [166/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:05.560 [167/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.560 [168/267] Linking static target lib/librte_reorder.a 00:02:05.560 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:05.560 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.560 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.560 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.560 [173/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:05.560 [174/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.560 [175/267] Linking static target lib/librte_security.a 00:02:05.560 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.560 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.560 [178/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.560 [179/267] Linking static target lib/librte_eal.a 00:02:05.560 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.560 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.560 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:05.560 [183/267] Linking static target lib/librte_mbuf.a 00:02:05.560 [184/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:05.560 [185/267] Linking target lib/librte_kvargs.so.24.1 00:02:05.560 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.560 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:05.560 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.560 [189/267] Linking static target drivers/librte_bus_vdev.a 00:02:05.560 [190/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:05.819 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.819 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:05.819 [193/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:05.819 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.820 [195/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.820 [196/267] Linking static target lib/librte_hash.a 00:02:05.820 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:05.820 [198/267] Linking static target drivers/librte_bus_pci.a 00:02:05.820 [199/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.820 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.820 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.820 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.820 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:05.820 [204/267] Linking static target drivers/librte_mempool_ring.a 00:02:05.820 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.820 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.820 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.820 [208/267] Linking static target lib/librte_cryptodev.a 00:02:05.820 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.820 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.820 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:06.081 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.081 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.081 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.081 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.343 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.343 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.343 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:06.343 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.343 [220/267] Linking static target lib/librte_ethdev.a 00:02:06.343 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.343 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.602 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.602 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.602 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.861 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.121 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:07.121 [228/267] Linking 
static target lib/librte_vhost.a 00:02:08.062 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.447 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.030 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.426 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.426 [233/267] Linking target lib/librte_eal.so.24.1 00:02:17.426 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:17.426 [235/267] Linking target lib/librte_ring.so.24.1 00:02:17.426 [236/267] Linking target lib/librte_timer.so.24.1 00:02:17.426 [237/267] Linking target lib/librte_meter.so.24.1 00:02:17.426 [238/267] Linking target lib/librte_pci.so.24.1 00:02:17.426 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:17.426 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:17.426 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:17.426 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:17.426 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:17.426 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:17.426 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:17.687 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:17.687 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:17.687 [248/267] Linking target lib/librte_rcu.so.24.1 00:02:17.687 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:17.687 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:17.687 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:17.687 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:17.949 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:17.949 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:17.949 [255/267] Linking target lib/librte_net.so.24.1 00:02:17.949 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:17.949 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:17.949 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:17.949 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:18.210 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:18.210 [261/267] Linking target lib/librte_security.so.24.1 00:02:18.210 [262/267] Linking target lib/librte_hash.so.24.1 00:02:18.210 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:18.210 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:18.210 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:18.210 [266/267] Linking target lib/librte_power.so.24.1 00:02:18.470 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:18.470 INFO: autodetecting backend as ninja 00:02:18.470 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:22.676 CC lib/ut_mock/mock.o 00:02:22.676 CC lib/log/log.o 00:02:22.676 CC lib/ut/ut.o 00:02:22.676 CC lib/log/log_flags.o 
00:02:22.676 CC lib/log/log_deprecated.o 00:02:22.676 LIB libspdk_ut.a 00:02:22.676 LIB libspdk_log.a 00:02:22.676 LIB libspdk_ut_mock.a 00:02:22.676 SO libspdk_ut.so.2.0 00:02:22.676 SO libspdk_log.so.7.1 00:02:22.676 SO libspdk_ut_mock.so.6.0 00:02:22.676 SYMLINK libspdk_ut.so 00:02:22.676 SYMLINK libspdk_log.so 00:02:22.676 SYMLINK libspdk_ut_mock.so 00:02:22.937 CC lib/util/base64.o 00:02:22.937 CC lib/util/bit_array.o 00:02:22.937 CC lib/dma/dma.o 00:02:22.937 CXX lib/trace_parser/trace.o 00:02:22.937 CC lib/util/cpuset.o 00:02:22.937 CC lib/ioat/ioat.o 00:02:22.937 CC lib/util/crc16.o 00:02:22.937 CC lib/util/crc32.o 00:02:22.937 CC lib/util/crc32c.o 00:02:22.937 CC lib/util/crc32_ieee.o 00:02:22.937 CC lib/util/crc64.o 00:02:22.937 CC lib/util/dif.o 00:02:22.937 CC lib/util/fd.o 00:02:22.937 CC lib/util/fd_group.o 00:02:22.937 CC lib/util/file.o 00:02:22.937 CC lib/util/hexlify.o 00:02:22.937 CC lib/util/iov.o 00:02:22.937 CC lib/util/math.o 00:02:22.937 CC lib/util/net.o 00:02:22.937 CC lib/util/pipe.o 00:02:22.937 CC lib/util/strerror_tls.o 00:02:22.937 CC lib/util/string.o 00:02:22.937 CC lib/util/uuid.o 00:02:22.937 CC lib/util/xor.o 00:02:22.937 CC lib/util/zipf.o 00:02:22.937 CC lib/util/md5.o 00:02:22.937 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.197 CC lib/vfio_user/host/vfio_user.o 00:02:23.197 LIB libspdk_dma.a 00:02:23.197 SO libspdk_dma.so.5.0 00:02:23.197 LIB libspdk_ioat.a 00:02:23.197 SO libspdk_ioat.so.7.0 00:02:23.197 SYMLINK libspdk_dma.so 00:02:23.197 SYMLINK libspdk_ioat.so 00:02:23.197 LIB libspdk_vfio_user.a 00:02:23.458 SO libspdk_vfio_user.so.5.0 00:02:23.458 LIB libspdk_util.a 00:02:23.458 SYMLINK libspdk_vfio_user.so 00:02:23.458 SO libspdk_util.so.10.1 00:02:23.458 LIB libspdk_trace_parser.a 00:02:23.458 SO libspdk_trace_parser.so.6.0 00:02:23.458 SYMLINK libspdk_util.so 00:02:23.720 SYMLINK libspdk_trace_parser.so 00:02:23.982 CC lib/json/json_parse.o 00:02:23.982 CC lib/json/json_util.o 00:02:23.982 CC lib/json/json_write.o 00:02:23.982 CC lib/env_dpdk/env.o 00:02:23.982 CC lib/rdma_utils/rdma_utils.o 00:02:23.982 CC lib/env_dpdk/memory.o 00:02:23.982 CC lib/idxd/idxd.o 00:02:23.982 CC lib/env_dpdk/pci.o 00:02:23.982 CC lib/vmd/vmd.o 00:02:23.982 CC lib/conf/conf.o 00:02:23.982 CC lib/idxd/idxd_user.o 00:02:23.982 CC lib/env_dpdk/init.o 00:02:23.982 CC lib/idxd/idxd_kernel.o 00:02:23.982 CC lib/vmd/led.o 00:02:23.982 CC lib/env_dpdk/threads.o 00:02:23.982 CC lib/env_dpdk/pci_ioat.o 00:02:23.982 CC lib/env_dpdk/pci_virtio.o 00:02:23.982 CC lib/env_dpdk/pci_vmd.o 00:02:23.982 CC lib/env_dpdk/pci_idxd.o 00:02:23.982 CC lib/env_dpdk/pci_event.o 00:02:23.982 CC lib/env_dpdk/sigbus_handler.o 00:02:23.982 CC lib/env_dpdk/pci_dpdk.o 00:02:23.982 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.982 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.243 LIB libspdk_conf.a 00:02:24.243 LIB libspdk_json.a 00:02:24.243 LIB libspdk_rdma_utils.a 00:02:24.243 SO libspdk_conf.so.6.0 00:02:24.243 SO libspdk_json.so.6.0 00:02:24.243 SO libspdk_rdma_utils.so.1.0 00:02:24.243 SYMLINK libspdk_conf.so 00:02:24.504 SYMLINK libspdk_json.so 00:02:24.504 SYMLINK libspdk_rdma_utils.so 00:02:24.504 LIB libspdk_idxd.a 00:02:24.504 SO libspdk_idxd.so.12.1 00:02:24.504 LIB libspdk_vmd.a 00:02:24.765 SO libspdk_vmd.so.6.0 00:02:24.765 SYMLINK libspdk_idxd.so 00:02:24.765 SYMLINK libspdk_vmd.so 00:02:24.765 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.765 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.765 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.765 CC lib/rdma_provider/common.o 00:02:24.765 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:02:24.765 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:25.026 LIB libspdk_rdma_provider.a 00:02:25.026 SO libspdk_rdma_provider.so.7.0 00:02:25.026 LIB libspdk_jsonrpc.a 00:02:25.026 SO libspdk_jsonrpc.so.6.0 00:02:25.026 SYMLINK libspdk_rdma_provider.so 00:02:25.026 SYMLINK libspdk_jsonrpc.so 00:02:25.286 LIB libspdk_env_dpdk.a 00:02:25.286 SO libspdk_env_dpdk.so.15.1 00:02:25.547 SYMLINK libspdk_env_dpdk.so 00:02:25.547 CC lib/rpc/rpc.o 00:02:25.807 LIB libspdk_rpc.a 00:02:25.807 SO libspdk_rpc.so.6.0 00:02:25.807 SYMLINK libspdk_rpc.so 00:02:26.068 CC lib/notify/notify.o 00:02:26.068 CC lib/notify/notify_rpc.o 00:02:26.068 CC lib/trace/trace.o 00:02:26.068 CC lib/keyring/keyring.o 00:02:26.068 CC lib/trace/trace_flags.o 00:02:26.068 CC lib/keyring/keyring_rpc.o 00:02:26.068 CC lib/trace/trace_rpc.o 00:02:26.327 LIB libspdk_notify.a 00:02:26.327 SO libspdk_notify.so.6.0 00:02:26.327 LIB libspdk_keyring.a 00:02:26.327 LIB libspdk_trace.a 00:02:26.327 SYMLINK libspdk_notify.so 00:02:26.587 SO libspdk_keyring.so.2.0 00:02:26.587 SO libspdk_trace.so.11.0 00:02:26.587 SYMLINK libspdk_keyring.so 00:02:26.587 SYMLINK libspdk_trace.so 00:02:26.847 CC lib/sock/sock.o 00:02:26.847 CC lib/sock/sock_rpc.o 00:02:26.847 CC lib/thread/thread.o 00:02:26.847 CC lib/thread/iobuf.o 00:02:27.418 LIB libspdk_sock.a 00:02:27.418 SO libspdk_sock.so.10.0 00:02:27.418 SYMLINK libspdk_sock.so 00:02:27.679 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.679 CC lib/nvme/nvme_ctrlr.o 00:02:27.679 CC lib/nvme/nvme_fabric.o 00:02:27.679 CC lib/nvme/nvme_ns_cmd.o 00:02:27.679 CC lib/nvme/nvme_ns.o 00:02:27.679 CC lib/nvme/nvme_pcie_common.o 00:02:27.679 CC lib/nvme/nvme_pcie.o 00:02:27.679 CC lib/nvme/nvme_qpair.o 00:02:27.679 CC lib/nvme/nvme.o 00:02:27.679 CC lib/nvme/nvme_quirks.o 00:02:27.679 CC lib/nvme/nvme_transport.o 00:02:27.679 CC lib/nvme/nvme_discovery.o 00:02:27.679 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.679 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.679 CC lib/nvme/nvme_tcp.o 00:02:27.679 CC lib/nvme/nvme_opal.o 00:02:27.679 CC lib/nvme/nvme_io_msg.o 00:02:27.679 CC lib/nvme/nvme_poll_group.o 00:02:27.679 CC lib/nvme/nvme_zns.o 00:02:27.679 CC lib/nvme/nvme_stubs.o 00:02:27.679 CC lib/nvme/nvme_auth.o 00:02:27.679 CC lib/nvme/nvme_cuse.o 00:02:27.679 CC lib/nvme/nvme_vfio_user.o 00:02:27.679 CC lib/nvme/nvme_rdma.o 00:02:28.251 LIB libspdk_thread.a 00:02:28.251 SO libspdk_thread.so.11.0 00:02:28.512 SYMLINK libspdk_thread.so 00:02:28.771 CC lib/accel/accel.o 00:02:28.771 CC lib/accel/accel_rpc.o 00:02:28.771 CC lib/accel/accel_sw.o 00:02:28.771 CC lib/init/json_config.o 00:02:28.771 CC lib/vfu_tgt/tgt_endpoint.o 00:02:28.771 CC lib/init/subsystem.o 00:02:28.771 CC lib/vfu_tgt/tgt_rpc.o 00:02:28.771 CC lib/init/subsystem_rpc.o 00:02:28.771 CC lib/init/rpc.o 00:02:28.771 CC lib/virtio/virtio.o 00:02:28.771 CC lib/blob/blobstore.o 00:02:28.772 CC lib/virtio/virtio_vhost_user.o 00:02:28.772 CC lib/blob/request.o 00:02:28.772 CC lib/blob/zeroes.o 00:02:28.772 CC lib/virtio/virtio_vfio_user.o 00:02:28.772 CC lib/fsdev/fsdev_io.o 00:02:28.772 CC lib/fsdev/fsdev.o 00:02:28.772 CC lib/blob/blob_bs_dev.o 00:02:28.772 CC lib/virtio/virtio_pci.o 00:02:28.772 CC lib/fsdev/fsdev_rpc.o 00:02:29.032 LIB libspdk_init.a 00:02:29.032 SO libspdk_init.so.6.0 00:02:29.032 LIB libspdk_vfu_tgt.a 00:02:29.032 LIB libspdk_virtio.a 00:02:29.032 SO libspdk_vfu_tgt.so.3.0 00:02:29.032 SYMLINK libspdk_init.so 00:02:29.293 SO libspdk_virtio.so.7.0 00:02:29.293 SYMLINK libspdk_vfu_tgt.so 00:02:29.293 
SYMLINK libspdk_virtio.so 00:02:29.293 LIB libspdk_fsdev.a 00:02:29.554 SO libspdk_fsdev.so.2.0 00:02:29.554 CC lib/event/app.o 00:02:29.554 CC lib/event/reactor.o 00:02:29.554 CC lib/event/log_rpc.o 00:02:29.554 CC lib/event/app_rpc.o 00:02:29.554 CC lib/event/scheduler_static.o 00:02:29.554 SYMLINK libspdk_fsdev.so 00:02:29.814 LIB libspdk_accel.a 00:02:29.814 LIB libspdk_nvme.a 00:02:29.814 SO libspdk_accel.so.16.0 00:02:29.814 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:29.814 SYMLINK libspdk_accel.so 00:02:29.814 SO libspdk_nvme.so.15.0 00:02:29.814 LIB libspdk_event.a 00:02:30.075 SO libspdk_event.so.14.0 00:02:30.075 SYMLINK libspdk_event.so 00:02:30.075 SYMLINK libspdk_nvme.so 00:02:30.336 CC lib/bdev/bdev.o 00:02:30.336 CC lib/bdev/bdev_rpc.o 00:02:30.336 CC lib/bdev/bdev_zone.o 00:02:30.336 CC lib/bdev/part.o 00:02:30.336 CC lib/bdev/scsi_nvme.o 00:02:30.597 LIB libspdk_fuse_dispatcher.a 00:02:30.597 SO libspdk_fuse_dispatcher.so.1.0 00:02:30.597 SYMLINK libspdk_fuse_dispatcher.so 00:02:31.538 LIB libspdk_blob.a 00:02:31.538 SO libspdk_blob.so.11.0 00:02:31.538 SYMLINK libspdk_blob.so 00:02:32.109 CC lib/blobfs/blobfs.o 00:02:32.109 CC lib/blobfs/tree.o 00:02:32.109 CC lib/lvol/lvol.o 00:02:32.680 LIB libspdk_bdev.a 00:02:32.680 SO libspdk_bdev.so.17.0 00:02:32.680 LIB libspdk_blobfs.a 00:02:32.680 SYMLINK libspdk_bdev.so 00:02:32.680 SO libspdk_blobfs.so.10.0 00:02:32.680 LIB libspdk_lvol.a 00:02:32.680 SYMLINK libspdk_blobfs.so 00:02:32.941 SO libspdk_lvol.so.10.0 00:02:32.941 SYMLINK libspdk_lvol.so 00:02:32.941 CC lib/nbd/nbd.o 00:02:32.941 CC lib/nbd/nbd_rpc.o 00:02:33.204 CC lib/scsi/dev.o 00:02:33.204 CC lib/nvmf/ctrlr.o 00:02:33.204 CC lib/scsi/lun.o 00:02:33.204 CC lib/scsi/port.o 00:02:33.204 CC lib/nvmf/ctrlr_discovery.o 00:02:33.204 CC lib/scsi/scsi.o 00:02:33.204 CC lib/ublk/ublk.o 00:02:33.204 CC lib/nvmf/ctrlr_bdev.o 00:02:33.204 CC lib/scsi/scsi_bdev.o 00:02:33.204 CC lib/ublk/ublk_rpc.o 00:02:33.204 CC lib/nvmf/subsystem.o 00:02:33.204 CC lib/scsi/scsi_pr.o 00:02:33.204 CC lib/ftl/ftl_core.o 00:02:33.204 CC lib/nvmf/nvmf.o 00:02:33.204 CC lib/scsi/scsi_rpc.o 00:02:33.204 CC lib/nvmf/nvmf_rpc.o 00:02:33.204 CC lib/scsi/task.o 00:02:33.204 CC lib/ftl/ftl_init.o 00:02:33.204 CC lib/nvmf/transport.o 00:02:33.204 CC lib/ftl/ftl_layout.o 00:02:33.204 CC lib/nvmf/tcp.o 00:02:33.204 CC lib/ftl/ftl_debug.o 00:02:33.204 CC lib/ftl/ftl_io.o 00:02:33.204 CC lib/nvmf/stubs.o 00:02:33.204 CC lib/nvmf/mdns_server.o 00:02:33.204 CC lib/nvmf/vfio_user.o 00:02:33.204 CC lib/ftl/ftl_sb.o 00:02:33.204 CC lib/nvmf/auth.o 00:02:33.204 CC lib/ftl/ftl_l2p.o 00:02:33.204 CC lib/nvmf/rdma.o 00:02:33.204 CC lib/ftl/ftl_l2p_flat.o 00:02:33.204 CC lib/ftl/ftl_nv_cache.o 00:02:33.204 CC lib/ftl/ftl_band.o 00:02:33.204 CC lib/ftl/ftl_band_ops.o 00:02:33.204 CC lib/ftl/ftl_writer.o 00:02:33.204 CC lib/ftl/ftl_rq.o 00:02:33.204 CC lib/ftl/ftl_reloc.o 00:02:33.204 CC lib/ftl/ftl_p2l.o 00:02:33.204 CC lib/ftl/ftl_l2p_cache.o 00:02:33.204 CC lib/ftl/ftl_p2l_log.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.204 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.204 CC lib/ftl/utils/ftl_conf.o 00:02:33.204 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.204 CC lib/ftl/utils/ftl_md.o 00:02:33.204 CC lib/ftl/utils/ftl_mempool.o 00:02:33.204 CC lib/ftl/utils/ftl_property.o 00:02:33.204 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.204 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.204 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.204 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.204 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.204 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.204 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:33.204 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.204 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.204 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.204 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.204 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:33.204 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:33.204 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:33.204 CC lib/ftl/base/ftl_base_dev.o 00:02:33.204 CC lib/ftl/ftl_trace.o 00:02:33.204 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.776 LIB libspdk_nbd.a 00:02:33.776 SO libspdk_nbd.so.7.0 00:02:33.776 SYMLINK libspdk_nbd.so 00:02:34.037 LIB libspdk_scsi.a 00:02:34.037 LIB libspdk_ublk.a 00:02:34.037 SO libspdk_ublk.so.3.0 00:02:34.037 SO libspdk_scsi.so.9.0 00:02:34.037 SYMLINK libspdk_ublk.so 00:02:34.037 SYMLINK libspdk_scsi.so 00:02:34.298 LIB libspdk_ftl.a 00:02:34.558 SO libspdk_ftl.so.9.0 00:02:34.558 CC lib/vhost/vhost.o 00:02:34.558 CC lib/vhost/vhost_rpc.o 00:02:34.558 CC lib/vhost/vhost_scsi.o 00:02:34.558 CC lib/vhost/vhost_blk.o 00:02:34.558 CC lib/vhost/rte_vhost_user.o 00:02:34.558 CC lib/iscsi/conn.o 00:02:34.558 CC lib/iscsi/init_grp.o 00:02:34.558 CC lib/iscsi/iscsi.o 00:02:34.558 CC lib/iscsi/param.o 00:02:34.558 CC lib/iscsi/portal_grp.o 00:02:34.558 CC lib/iscsi/tgt_node.o 00:02:34.558 CC lib/iscsi/iscsi_subsystem.o 00:02:34.558 CC lib/iscsi/iscsi_rpc.o 00:02:34.558 CC lib/iscsi/task.o 00:02:34.818 SYMLINK libspdk_ftl.so 00:02:35.391 LIB libspdk_nvmf.a 00:02:35.391 SO libspdk_nvmf.so.20.0 00:02:35.391 LIB libspdk_vhost.a 00:02:35.391 SYMLINK libspdk_nvmf.so 00:02:35.652 SO libspdk_vhost.so.8.0 00:02:35.652 SYMLINK libspdk_vhost.so 00:02:35.652 LIB libspdk_iscsi.a 00:02:35.652 SO libspdk_iscsi.so.8.0 00:02:35.912 SYMLINK libspdk_iscsi.so 00:02:36.485 CC module/env_dpdk/env_dpdk_rpc.o 00:02:36.485 CC module/vfu_device/vfu_virtio.o 00:02:36.485 CC module/vfu_device/vfu_virtio_blk.o 00:02:36.485 CC module/vfu_device/vfu_virtio_scsi.o 00:02:36.485 CC module/vfu_device/vfu_virtio_rpc.o 00:02:36.485 CC module/vfu_device/vfu_virtio_fs.o 00:02:36.745 CC module/sock/posix/posix.o 00:02:36.745 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.745 CC module/accel/error/accel_error.o 00:02:36.745 LIB libspdk_env_dpdk_rpc.a 00:02:36.745 CC module/accel/error/accel_error_rpc.o 00:02:36.745 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.745 CC module/blob/bdev/blob_bdev.o 00:02:36.745 CC module/keyring/file/keyring.o 00:02:36.745 CC module/accel/ioat/accel_ioat.o 00:02:36.745 CC module/accel/dsa/accel_dsa.o 00:02:36.745 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.745 CC module/accel/iaa/accel_iaa.o 00:02:36.745 CC module/keyring/file/keyring_rpc.o 00:02:36.745 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.745 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.745 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.745 CC module/keyring/linux/keyring.o 00:02:36.745 CC module/keyring/linux/keyring_rpc.o 00:02:36.745 CC module/fsdev/aio/fsdev_aio.o 
00:02:36.745 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:36.745 CC module/fsdev/aio/linux_aio_mgr.o 00:02:36.745 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.745 SYMLINK libspdk_env_dpdk_rpc.so 00:02:37.006 LIB libspdk_scheduler_gscheduler.a 00:02:37.006 LIB libspdk_keyring_linux.a 00:02:37.006 LIB libspdk_keyring_file.a 00:02:37.006 SO libspdk_scheduler_gscheduler.so.4.0 00:02:37.006 LIB libspdk_scheduler_dynamic.a 00:02:37.006 LIB libspdk_scheduler_dpdk_governor.a 00:02:37.006 SO libspdk_keyring_linux.so.1.0 00:02:37.006 LIB libspdk_accel_error.a 00:02:37.006 LIB libspdk_accel_ioat.a 00:02:37.006 SO libspdk_keyring_file.so.2.0 00:02:37.006 LIB libspdk_accel_iaa.a 00:02:37.006 SO libspdk_scheduler_dynamic.so.4.0 00:02:37.006 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:37.006 SYMLINK libspdk_scheduler_gscheduler.so 00:02:37.006 SO libspdk_accel_error.so.2.0 00:02:37.006 SO libspdk_accel_ioat.so.6.0 00:02:37.006 SYMLINK libspdk_keyring_linux.so 00:02:37.006 LIB libspdk_accel_dsa.a 00:02:37.006 SO libspdk_accel_iaa.so.3.0 00:02:37.006 LIB libspdk_blob_bdev.a 00:02:37.006 SYMLINK libspdk_keyring_file.so 00:02:37.006 SYMLINK libspdk_scheduler_dynamic.so 00:02:37.006 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:37.006 SYMLINK libspdk_accel_error.so 00:02:37.006 SO libspdk_accel_dsa.so.5.0 00:02:37.006 SO libspdk_blob_bdev.so.11.0 00:02:37.006 SYMLINK libspdk_accel_ioat.so 00:02:37.006 SYMLINK libspdk_accel_iaa.so 00:02:37.006 LIB libspdk_vfu_device.a 00:02:37.006 SYMLINK libspdk_accel_dsa.so 00:02:37.006 SYMLINK libspdk_blob_bdev.so 00:02:37.267 SO libspdk_vfu_device.so.3.0 00:02:37.267 SYMLINK libspdk_vfu_device.so 00:02:37.267 LIB libspdk_fsdev_aio.a 00:02:37.267 LIB libspdk_sock_posix.a 00:02:37.527 SO libspdk_fsdev_aio.so.1.0 00:02:37.527 SO libspdk_sock_posix.so.6.0 00:02:37.527 SYMLINK libspdk_fsdev_aio.so 00:02:37.527 SYMLINK libspdk_sock_posix.so 00:02:37.787 CC module/bdev/delay/vbdev_delay.o 00:02:37.787 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.787 CC module/blobfs/bdev/blobfs_bdev.o 00:02:37.787 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:37.787 CC module/bdev/lvol/vbdev_lvol.o 00:02:37.787 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:37.787 CC module/bdev/gpt/gpt.o 00:02:37.787 CC module/bdev/gpt/vbdev_gpt.o 00:02:37.787 CC module/bdev/nvme/bdev_nvme.o 00:02:37.787 CC module/bdev/raid/bdev_raid.o 00:02:37.787 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.787 CC module/bdev/null/bdev_null.o 00:02:37.787 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.787 CC module/bdev/error/vbdev_error.o 00:02:37.787 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.787 CC module/bdev/null/bdev_null_rpc.o 00:02:37.787 CC module/bdev/nvme/nvme_rpc.o 00:02:37.787 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.787 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.787 CC module/bdev/error/vbdev_error_rpc.o 00:02:37.787 CC module/bdev/raid/raid0.o 00:02:37.787 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.787 CC module/bdev/nvme/vbdev_opal.o 00:02:37.787 CC module/bdev/raid/concat.o 00:02:37.787 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.787 CC module/bdev/raid/raid1.o 00:02:37.787 CC module/bdev/split/vbdev_split.o 00:02:37.787 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:37.787 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.787 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.787 CC module/bdev/malloc/bdev_malloc.o 00:02:37.787 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.787 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:37.787 CC module/bdev/iscsi/bdev_iscsi.o 
00:02:37.787 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:37.787 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:37.787 CC module/bdev/ftl/bdev_ftl.o 00:02:37.787 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.787 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:37.787 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:37.787 CC module/bdev/aio/bdev_aio.o 00:02:37.787 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.048 LIB libspdk_blobfs_bdev.a 00:02:38.048 SO libspdk_blobfs_bdev.so.6.0 00:02:38.048 LIB libspdk_bdev_gpt.a 00:02:38.048 LIB libspdk_bdev_split.a 00:02:38.048 LIB libspdk_bdev_error.a 00:02:38.048 LIB libspdk_bdev_null.a 00:02:38.048 LIB libspdk_bdev_passthru.a 00:02:38.049 SYMLINK libspdk_blobfs_bdev.so 00:02:38.049 SO libspdk_bdev_gpt.so.6.0 00:02:38.049 SO libspdk_bdev_split.so.6.0 00:02:38.049 SO libspdk_bdev_error.so.6.0 00:02:38.049 SO libspdk_bdev_null.so.6.0 00:02:38.049 SO libspdk_bdev_passthru.so.6.0 00:02:38.049 LIB libspdk_bdev_ftl.a 00:02:38.049 LIB libspdk_bdev_delay.a 00:02:38.049 LIB libspdk_bdev_zone_block.a 00:02:38.309 SO libspdk_bdev_ftl.so.6.0 00:02:38.309 SYMLINK libspdk_bdev_error.so 00:02:38.309 LIB libspdk_bdev_iscsi.a 00:02:38.309 LIB libspdk_bdev_aio.a 00:02:38.309 SYMLINK libspdk_bdev_gpt.so 00:02:38.309 SO libspdk_bdev_delay.so.6.0 00:02:38.309 LIB libspdk_bdev_malloc.a 00:02:38.309 SYMLINK libspdk_bdev_split.so 00:02:38.309 SO libspdk_bdev_zone_block.so.6.0 00:02:38.309 SYMLINK libspdk_bdev_null.so 00:02:38.309 SYMLINK libspdk_bdev_passthru.so 00:02:38.309 SO libspdk_bdev_iscsi.so.6.0 00:02:38.309 SO libspdk_bdev_aio.so.6.0 00:02:38.309 SO libspdk_bdev_malloc.so.6.0 00:02:38.309 SYMLINK libspdk_bdev_ftl.so 00:02:38.309 SYMLINK libspdk_bdev_delay.so 00:02:38.309 SYMLINK libspdk_bdev_iscsi.so 00:02:38.309 SYMLINK libspdk_bdev_zone_block.so 00:02:38.309 LIB libspdk_bdev_lvol.a 00:02:38.309 SYMLINK libspdk_bdev_aio.so 00:02:38.309 SYMLINK libspdk_bdev_malloc.so 00:02:38.309 SO libspdk_bdev_lvol.so.6.0 00:02:38.309 LIB libspdk_bdev_virtio.a 00:02:38.309 SO libspdk_bdev_virtio.so.6.0 00:02:38.309 SYMLINK libspdk_bdev_lvol.so 00:02:38.570 SYMLINK libspdk_bdev_virtio.so 00:02:38.830 LIB libspdk_bdev_raid.a 00:02:38.830 SO libspdk_bdev_raid.so.6.0 00:02:38.830 SYMLINK libspdk_bdev_raid.so 00:02:40.213 LIB libspdk_bdev_nvme.a 00:02:40.213 SO libspdk_bdev_nvme.so.7.1 00:02:40.213 SYMLINK libspdk_bdev_nvme.so 00:02:41.156 CC module/event/subsystems/iobuf/iobuf.o 00:02:41.156 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:41.156 CC module/event/subsystems/vmd/vmd.o 00:02:41.156 CC module/event/subsystems/scheduler/scheduler.o 00:02:41.156 CC module/event/subsystems/sock/sock.o 00:02:41.156 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:41.156 CC module/event/subsystems/keyring/keyring.o 00:02:41.156 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:41.156 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:41.156 CC module/event/subsystems/fsdev/fsdev.o 00:02:41.156 LIB libspdk_event_sock.a 00:02:41.156 LIB libspdk_event_vfu_tgt.a 00:02:41.156 LIB libspdk_event_keyring.a 00:02:41.156 LIB libspdk_event_iobuf.a 00:02:41.156 LIB libspdk_event_scheduler.a 00:02:41.156 LIB libspdk_event_vhost_blk.a 00:02:41.156 LIB libspdk_event_vmd.a 00:02:41.156 LIB libspdk_event_fsdev.a 00:02:41.156 SO libspdk_event_sock.so.5.0 00:02:41.156 SO libspdk_event_vfu_tgt.so.3.0 00:02:41.156 SO libspdk_event_vhost_blk.so.3.0 00:02:41.156 SO libspdk_event_keyring.so.1.0 00:02:41.156 SO libspdk_event_fsdev.so.1.0 00:02:41.156 SO libspdk_event_scheduler.so.4.0 00:02:41.156 SO 
libspdk_event_iobuf.so.3.0 00:02:41.156 SO libspdk_event_vmd.so.6.0 00:02:41.156 SYMLINK libspdk_event_sock.so 00:02:41.418 SYMLINK libspdk_event_vfu_tgt.so 00:02:41.418 SYMLINK libspdk_event_vhost_blk.so 00:02:41.418 SYMLINK libspdk_event_keyring.so 00:02:41.418 SYMLINK libspdk_event_fsdev.so 00:02:41.418 SYMLINK libspdk_event_scheduler.so 00:02:41.418 SYMLINK libspdk_event_iobuf.so 00:02:41.418 SYMLINK libspdk_event_vmd.so 00:02:41.679 CC module/event/subsystems/accel/accel.o 00:02:41.940 LIB libspdk_event_accel.a 00:02:41.940 SO libspdk_event_accel.so.6.0 00:02:41.940 SYMLINK libspdk_event_accel.so 00:02:42.200 CC module/event/subsystems/bdev/bdev.o 00:02:42.461 LIB libspdk_event_bdev.a 00:02:42.461 SO libspdk_event_bdev.so.6.0 00:02:42.461 SYMLINK libspdk_event_bdev.so 00:02:43.033 CC module/event/subsystems/nbd/nbd.o 00:02:43.033 CC module/event/subsystems/ublk/ublk.o 00:02:43.033 CC module/event/subsystems/scsi/scsi.o 00:02:43.033 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:43.033 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:43.033 LIB libspdk_event_nbd.a 00:02:43.033 LIB libspdk_event_ublk.a 00:02:43.033 LIB libspdk_event_scsi.a 00:02:43.033 SO libspdk_event_nbd.so.6.0 00:02:43.033 SO libspdk_event_ublk.so.3.0 00:02:43.294 SO libspdk_event_scsi.so.6.0 00:02:43.294 SYMLINK libspdk_event_nbd.so 00:02:43.294 LIB libspdk_event_nvmf.a 00:02:43.294 SYMLINK libspdk_event_ublk.so 00:02:43.294 SYMLINK libspdk_event_scsi.so 00:02:43.294 SO libspdk_event_nvmf.so.6.0 00:02:43.294 SYMLINK libspdk_event_nvmf.so 00:02:43.554 CC module/event/subsystems/iscsi/iscsi.o 00:02:43.554 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:43.815 LIB libspdk_event_vhost_scsi.a 00:02:43.815 LIB libspdk_event_iscsi.a 00:02:43.815 SO libspdk_event_vhost_scsi.so.3.0 00:02:43.815 SO libspdk_event_iscsi.so.6.0 00:02:43.815 SYMLINK libspdk_event_vhost_scsi.so 00:02:43.815 SYMLINK libspdk_event_iscsi.so 00:02:44.076 SO libspdk.so.6.0 00:02:44.076 SYMLINK libspdk.so 00:02:44.650 CC app/trace_record/trace_record.o 00:02:44.650 CC app/spdk_lspci/spdk_lspci.o 00:02:44.650 CXX app/trace/trace.o 00:02:44.650 CC test/rpc_client/rpc_client_test.o 00:02:44.650 CC app/spdk_nvme_discover/discovery_aer.o 00:02:44.650 CC app/spdk_nvme_identify/identify.o 00:02:44.650 CC app/spdk_top/spdk_top.o 00:02:44.650 TEST_HEADER include/spdk/accel_module.h 00:02:44.650 CC app/spdk_nvme_perf/perf.o 00:02:44.650 TEST_HEADER include/spdk/accel.h 00:02:44.650 TEST_HEADER include/spdk/assert.h 00:02:44.650 TEST_HEADER include/spdk/barrier.h 00:02:44.650 TEST_HEADER include/spdk/bdev.h 00:02:44.650 TEST_HEADER include/spdk/base64.h 00:02:44.650 TEST_HEADER include/spdk/bdev_zone.h 00:02:44.650 TEST_HEADER include/spdk/bdev_module.h 00:02:44.650 TEST_HEADER include/spdk/bit_array.h 00:02:44.650 TEST_HEADER include/spdk/bit_pool.h 00:02:44.650 TEST_HEADER include/spdk/blob_bdev.h 00:02:44.650 TEST_HEADER include/spdk/blob.h 00:02:44.650 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:44.650 TEST_HEADER include/spdk/blobfs.h 00:02:44.650 TEST_HEADER include/spdk/conf.h 00:02:44.650 TEST_HEADER include/spdk/config.h 00:02:44.650 TEST_HEADER include/spdk/cpuset.h 00:02:44.650 TEST_HEADER include/spdk/crc16.h 00:02:44.650 TEST_HEADER include/spdk/crc64.h 00:02:44.650 TEST_HEADER include/spdk/crc32.h 00:02:44.650 TEST_HEADER include/spdk/dif.h 00:02:44.650 TEST_HEADER include/spdk/dma.h 00:02:44.650 TEST_HEADER include/spdk/endian.h 00:02:44.650 TEST_HEADER include/spdk/env.h 00:02:44.650 TEST_HEADER include/spdk/env_dpdk.h 00:02:44.650 
TEST_HEADER include/spdk/event.h 00:02:44.650 TEST_HEADER include/spdk/fd_group.h 00:02:44.650 TEST_HEADER include/spdk/file.h 00:02:44.650 TEST_HEADER include/spdk/fd.h 00:02:44.650 TEST_HEADER include/spdk/fsdev.h 00:02:44.650 TEST_HEADER include/spdk/fsdev_module.h 00:02:44.650 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:44.650 TEST_HEADER include/spdk/ftl.h 00:02:44.650 TEST_HEADER include/spdk/hexlify.h 00:02:44.650 TEST_HEADER include/spdk/gpt_spec.h 00:02:44.650 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:44.650 TEST_HEADER include/spdk/histogram_data.h 00:02:44.650 TEST_HEADER include/spdk/idxd.h 00:02:44.650 TEST_HEADER include/spdk/idxd_spec.h 00:02:44.650 TEST_HEADER include/spdk/init.h 00:02:44.650 TEST_HEADER include/spdk/ioat.h 00:02:44.650 TEST_HEADER include/spdk/ioat_spec.h 00:02:44.650 CC app/spdk_dd/spdk_dd.o 00:02:44.650 TEST_HEADER include/spdk/iscsi_spec.h 00:02:44.650 CC app/nvmf_tgt/nvmf_main.o 00:02:44.650 CC app/iscsi_tgt/iscsi_tgt.o 00:02:44.650 TEST_HEADER include/spdk/jsonrpc.h 00:02:44.650 TEST_HEADER include/spdk/json.h 00:02:44.651 TEST_HEADER include/spdk/keyring.h 00:02:44.651 TEST_HEADER include/spdk/keyring_module.h 00:02:44.651 TEST_HEADER include/spdk/likely.h 00:02:44.651 TEST_HEADER include/spdk/log.h 00:02:44.651 TEST_HEADER include/spdk/md5.h 00:02:44.651 TEST_HEADER include/spdk/lvol.h 00:02:44.651 TEST_HEADER include/spdk/memory.h 00:02:44.651 TEST_HEADER include/spdk/mmio.h 00:02:44.651 TEST_HEADER include/spdk/nbd.h 00:02:44.651 TEST_HEADER include/spdk/net.h 00:02:44.651 TEST_HEADER include/spdk/notify.h 00:02:44.651 TEST_HEADER include/spdk/nvme.h 00:02:44.651 TEST_HEADER include/spdk/nvme_intel.h 00:02:44.651 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:44.651 TEST_HEADER include/spdk/nvme_spec.h 00:02:44.651 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:44.651 CC app/spdk_tgt/spdk_tgt.o 00:02:44.651 TEST_HEADER include/spdk/nvme_zns.h 00:02:44.651 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:44.651 TEST_HEADER include/spdk/nvmf.h 00:02:44.651 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:44.651 TEST_HEADER include/spdk/nvmf_spec.h 00:02:44.651 TEST_HEADER include/spdk/nvmf_transport.h 00:02:44.651 TEST_HEADER include/spdk/opal.h 00:02:44.651 TEST_HEADER include/spdk/opal_spec.h 00:02:44.651 TEST_HEADER include/spdk/pci_ids.h 00:02:44.651 TEST_HEADER include/spdk/pipe.h 00:02:44.651 TEST_HEADER include/spdk/queue.h 00:02:44.651 TEST_HEADER include/spdk/reduce.h 00:02:44.651 TEST_HEADER include/spdk/scheduler.h 00:02:44.651 TEST_HEADER include/spdk/rpc.h 00:02:44.651 TEST_HEADER include/spdk/scsi.h 00:02:44.651 TEST_HEADER include/spdk/scsi_spec.h 00:02:44.651 TEST_HEADER include/spdk/sock.h 00:02:44.651 TEST_HEADER include/spdk/stdinc.h 00:02:44.651 TEST_HEADER include/spdk/string.h 00:02:44.651 TEST_HEADER include/spdk/thread.h 00:02:44.651 TEST_HEADER include/spdk/trace.h 00:02:44.651 TEST_HEADER include/spdk/trace_parser.h 00:02:44.651 TEST_HEADER include/spdk/tree.h 00:02:44.651 TEST_HEADER include/spdk/ublk.h 00:02:44.651 TEST_HEADER include/spdk/util.h 00:02:44.651 TEST_HEADER include/spdk/uuid.h 00:02:44.651 TEST_HEADER include/spdk/version.h 00:02:44.651 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:44.651 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:44.651 TEST_HEADER include/spdk/vhost.h 00:02:44.651 TEST_HEADER include/spdk/vmd.h 00:02:44.651 TEST_HEADER include/spdk/xor.h 00:02:44.651 TEST_HEADER include/spdk/zipf.h 00:02:44.651 CXX test/cpp_headers/accel.o 00:02:44.651 CXX test/cpp_headers/accel_module.o 00:02:44.651 
CXX test/cpp_headers/assert.o 00:02:44.651 CXX test/cpp_headers/barrier.o 00:02:44.651 CXX test/cpp_headers/bdev.o 00:02:44.651 CXX test/cpp_headers/base64.o 00:02:44.651 CXX test/cpp_headers/bdev_module.o 00:02:44.651 CXX test/cpp_headers/bdev_zone.o 00:02:44.651 CXX test/cpp_headers/bit_array.o 00:02:44.651 CXX test/cpp_headers/bit_pool.o 00:02:44.651 CXX test/cpp_headers/blobfs_bdev.o 00:02:44.651 CXX test/cpp_headers/blobfs.o 00:02:44.651 CXX test/cpp_headers/blob_bdev.o 00:02:44.651 CXX test/cpp_headers/blob.o 00:02:44.651 CXX test/cpp_headers/conf.o 00:02:44.651 CXX test/cpp_headers/config.o 00:02:44.651 CXX test/cpp_headers/cpuset.o 00:02:44.651 CXX test/cpp_headers/crc16.o 00:02:44.651 CXX test/cpp_headers/crc64.o 00:02:44.651 CXX test/cpp_headers/crc32.o 00:02:44.651 CXX test/cpp_headers/dif.o 00:02:44.651 CXX test/cpp_headers/dma.o 00:02:44.651 CXX test/cpp_headers/endian.o 00:02:44.651 CXX test/cpp_headers/env_dpdk.o 00:02:44.651 CXX test/cpp_headers/env.o 00:02:44.651 CXX test/cpp_headers/fd_group.o 00:02:44.651 CXX test/cpp_headers/fd.o 00:02:44.651 CXX test/cpp_headers/event.o 00:02:44.651 CXX test/cpp_headers/fsdev.o 00:02:44.651 CXX test/cpp_headers/file.o 00:02:44.651 CXX test/cpp_headers/fsdev_module.o 00:02:44.651 CXX test/cpp_headers/ftl.o 00:02:44.651 CXX test/cpp_headers/fuse_dispatcher.o 00:02:44.651 CXX test/cpp_headers/hexlify.o 00:02:44.651 CXX test/cpp_headers/idxd_spec.o 00:02:44.651 CXX test/cpp_headers/gpt_spec.o 00:02:44.651 CXX test/cpp_headers/histogram_data.o 00:02:44.651 CXX test/cpp_headers/idxd.o 00:02:44.651 CXX test/cpp_headers/init.o 00:02:44.651 CXX test/cpp_headers/ioat.o 00:02:44.651 CXX test/cpp_headers/ioat_spec.o 00:02:44.651 CXX test/cpp_headers/iscsi_spec.o 00:02:44.651 CXX test/cpp_headers/json.o 00:02:44.651 CXX test/cpp_headers/keyring.o 00:02:44.651 CXX test/cpp_headers/keyring_module.o 00:02:44.651 CXX test/cpp_headers/likely.o 00:02:44.651 CXX test/cpp_headers/log.o 00:02:44.651 CXX test/cpp_headers/jsonrpc.o 00:02:44.651 CC examples/ioat/perf/perf.o 00:02:44.651 CXX test/cpp_headers/lvol.o 00:02:44.651 CXX test/cpp_headers/memory.o 00:02:44.651 CXX test/cpp_headers/mmio.o 00:02:44.651 CXX test/cpp_headers/nbd.o 00:02:44.651 CXX test/cpp_headers/md5.o 00:02:44.651 CXX test/cpp_headers/net.o 00:02:44.651 CXX test/cpp_headers/notify.o 00:02:44.651 CXX test/cpp_headers/nvme.o 00:02:44.651 CXX test/cpp_headers/nvme_intel.o 00:02:44.651 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:44.651 CXX test/cpp_headers/nvme_ocssd.o 00:02:44.651 CXX test/cpp_headers/nvme_zns.o 00:02:44.651 CC examples/util/zipf/zipf.o 00:02:44.651 CXX test/cpp_headers/nvmf_cmd.o 00:02:44.651 CXX test/cpp_headers/nvme_spec.o 00:02:44.651 CXX test/cpp_headers/nvmf.o 00:02:44.651 CXX test/cpp_headers/nvmf_spec.o 00:02:44.651 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:44.919 CC test/thread/poller_perf/poller_perf.o 00:02:44.919 CC test/app/jsoncat/jsoncat.o 00:02:44.919 CXX test/cpp_headers/nvmf_transport.o 00:02:44.919 CXX test/cpp_headers/opal.o 00:02:44.919 CXX test/cpp_headers/pci_ids.o 00:02:44.919 CC test/app/histogram_perf/histogram_perf.o 00:02:44.919 CXX test/cpp_headers/opal_spec.o 00:02:44.919 CC examples/ioat/verify/verify.o 00:02:44.919 CXX test/cpp_headers/pipe.o 00:02:44.919 CXX test/cpp_headers/queue.o 00:02:44.919 CXX test/cpp_headers/reduce.o 00:02:44.919 CXX test/cpp_headers/scsi.o 00:02:44.919 CXX test/cpp_headers/scheduler.o 00:02:44.919 CXX test/cpp_headers/rpc.o 00:02:44.919 LINK spdk_lspci 00:02:44.919 CXX test/cpp_headers/scsi_spec.o 00:02:44.919 CXX 
test/cpp_headers/sock.o 00:02:44.919 CXX test/cpp_headers/stdinc.o 00:02:44.919 CXX test/cpp_headers/thread.o 00:02:44.919 CXX test/cpp_headers/string.o 00:02:44.919 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:44.919 CXX test/cpp_headers/trace_parser.o 00:02:44.919 CXX test/cpp_headers/trace.o 00:02:44.919 CXX test/cpp_headers/ublk.o 00:02:44.919 CXX test/cpp_headers/tree.o 00:02:44.919 CXX test/cpp_headers/util.o 00:02:44.919 CXX test/cpp_headers/uuid.o 00:02:44.919 CC test/env/memory/memory_ut.o 00:02:44.919 CC test/env/pci/pci_ut.o 00:02:44.919 CXX test/cpp_headers/version.o 00:02:44.919 CC app/fio/nvme/fio_plugin.o 00:02:44.919 CC test/app/bdev_svc/bdev_svc.o 00:02:44.919 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.919 CC test/app/stub/stub.o 00:02:44.919 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.919 CXX test/cpp_headers/vhost.o 00:02:44.919 CXX test/cpp_headers/zipf.o 00:02:44.919 CXX test/cpp_headers/xor.o 00:02:44.919 CXX test/cpp_headers/vmd.o 00:02:44.919 CC test/dma/test_dma/test_dma.o 00:02:44.919 CC test/env/vtophys/vtophys.o 00:02:44.919 CC app/fio/bdev/fio_plugin.o 00:02:44.919 LINK spdk_nvme_discover 00:02:44.920 LINK rpc_client_test 00:02:45.225 LINK spdk_trace_record 00:02:45.225 LINK nvmf_tgt 00:02:45.225 LINK interrupt_tgt 00:02:45.532 LINK iscsi_tgt 00:02:45.532 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.532 LINK spdk_dd 00:02:45.532 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:45.532 LINK spdk_tgt 00:02:45.532 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:45.532 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:45.532 LINK stub 00:02:45.532 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:45.532 LINK histogram_perf 00:02:45.532 LINK spdk_trace 00:02:45.819 LINK jsoncat 00:02:45.819 LINK env_dpdk_post_init 00:02:45.819 LINK ioat_perf 00:02:45.819 LINK poller_perf 00:02:45.819 LINK zipf 00:02:45.819 LINK bdev_svc 00:02:45.819 LINK vtophys 00:02:46.102 LINK verify 00:02:46.102 CC app/vhost/vhost.o 00:02:46.102 LINK nvme_fuzz 00:02:46.364 LINK vhost_fuzz 00:02:46.365 LINK pci_ut 00:02:46.365 LINK spdk_nvme 00:02:46.365 LINK test_dma 00:02:46.365 LINK spdk_bdev 00:02:46.365 LINK spdk_top 00:02:46.365 LINK vhost 00:02:46.365 LINK mem_callbacks 00:02:46.365 CC test/event/reactor/reactor.o 00:02:46.365 LINK spdk_nvme_perf 00:02:46.365 CC test/event/reactor_perf/reactor_perf.o 00:02:46.365 CC test/event/event_perf/event_perf.o 00:02:46.365 CC examples/vmd/led/led.o 00:02:46.365 LINK spdk_nvme_identify 00:02:46.365 CC test/event/app_repeat/app_repeat.o 00:02:46.365 CC examples/idxd/perf/perf.o 00:02:46.365 CC examples/sock/hello_world/hello_sock.o 00:02:46.365 CC examples/vmd/lsvmd/lsvmd.o 00:02:46.365 CC test/event/scheduler/scheduler.o 00:02:46.365 CC examples/thread/thread/thread_ex.o 00:02:46.625 LINK reactor 00:02:46.625 LINK reactor_perf 00:02:46.625 LINK event_perf 00:02:46.625 LINK led 00:02:46.625 LINK lsvmd 00:02:46.625 LINK app_repeat 00:02:46.625 LINK hello_sock 00:02:46.625 LINK scheduler 00:02:46.885 LINK thread 00:02:46.885 LINK idxd_perf 00:02:46.885 LINK memory_ut 00:02:46.885 CC test/nvme/aer/aer.o 00:02:46.885 CC test/nvme/sgl/sgl.o 00:02:46.885 CC test/nvme/err_injection/err_injection.o 00:02:46.885 CC test/nvme/boot_partition/boot_partition.o 00:02:46.885 CC test/nvme/cuse/cuse.o 00:02:46.885 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:46.885 CC test/nvme/overhead/overhead.o 00:02:46.885 CC test/nvme/e2edp/nvme_dp.o 00:02:46.885 CC test/nvme/reset/reset.o 00:02:46.885 CC test/nvme/startup/startup.o 00:02:46.885 CC 
test/nvme/simple_copy/simple_copy.o 00:02:46.885 CC test/nvme/fused_ordering/fused_ordering.o 00:02:46.885 CC test/nvme/reserve/reserve.o 00:02:46.885 CC test/nvme/connect_stress/connect_stress.o 00:02:46.885 CC test/nvme/compliance/nvme_compliance.o 00:02:46.885 CC test/nvme/fdp/fdp.o 00:02:46.885 CC test/accel/dif/dif.o 00:02:46.885 CC test/blobfs/mkfs/mkfs.o 00:02:47.145 CC test/lvol/esnap/esnap.o 00:02:47.145 LINK err_injection 00:02:47.145 LINK boot_partition 00:02:47.145 LINK doorbell_aers 00:02:47.145 LINK startup 00:02:47.145 LINK connect_stress 00:02:47.145 LINK fused_ordering 00:02:47.145 LINK reserve 00:02:47.145 LINK simple_copy 00:02:47.145 LINK sgl 00:02:47.145 LINK aer 00:02:47.145 LINK reset 00:02:47.405 LINK mkfs 00:02:47.405 LINK overhead 00:02:47.405 LINK iscsi_fuzz 00:02:47.405 LINK nvme_dp 00:02:47.405 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.405 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:47.405 CC examples/nvme/abort/abort.o 00:02:47.405 CC examples/nvme/reconnect/reconnect.o 00:02:47.405 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:47.405 LINK nvme_compliance 00:02:47.405 CC examples/nvme/hotplug/hotplug.o 00:02:47.405 CC examples/nvme/hello_world/hello_world.o 00:02:47.405 CC examples/nvme/arbitration/arbitration.o 00:02:47.405 LINK fdp 00:02:47.405 CC examples/accel/perf/accel_perf.o 00:02:47.405 CC examples/blob/cli/blobcli.o 00:02:47.405 CC examples/blob/hello_world/hello_blob.o 00:02:47.405 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:47.665 LINK cmb_copy 00:02:47.665 LINK pmr_persistence 00:02:47.665 LINK hello_world 00:02:47.665 LINK hotplug 00:02:47.665 LINK dif 00:02:47.665 LINK reconnect 00:02:47.665 LINK abort 00:02:47.665 LINK arbitration 00:02:47.665 LINK hello_blob 00:02:47.927 LINK nvme_manage 00:02:47.927 LINK hello_fsdev 00:02:47.927 LINK accel_perf 00:02:47.927 LINK blobcli 00:02:48.187 LINK cuse 00:02:48.187 CC test/bdev/bdevio/bdevio.o 00:02:48.448 CC examples/bdev/hello_world/hello_bdev.o 00:02:48.448 CC examples/bdev/bdevperf/bdevperf.o 00:02:48.709 LINK bdevio 00:02:48.709 LINK hello_bdev 00:02:49.280 LINK bdevperf 00:02:49.850 CC examples/nvmf/nvmf/nvmf.o 00:02:50.111 LINK nvmf 00:02:51.495 LINK esnap 00:02:52.068 00:02:52.068 real 0m56.882s 00:02:52.068 user 8m6.681s 00:02:52.068 sys 5m36.382s 00:02:52.068 11:03:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:52.068 11:03:44 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.068 ************************************ 00:02:52.068 END TEST make 00:02:52.068 ************************************ 00:02:52.068 11:03:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.068 11:03:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.068 11:03:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:52.068 11:03:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.068 11:03:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.068 11:03:44 -- pm/common@44 -- $ pid=2404860 00:02:52.068 11:03:44 -- pm/common@50 -- $ kill -TERM 2404860 00:02:52.068 11:03:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.068 11:03:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.068 11:03:44 -- pm/common@44 -- $ pid=2404861 00:02:52.068 11:03:44 -- pm/common@50 -- $ kill -TERM 2404861 00:02:52.068 11:03:44 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:52.068 11:03:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:52.068 11:03:44 -- pm/common@44 -- $ pid=2404863 00:02:52.068 11:03:44 -- pm/common@50 -- $ kill -TERM 2404863 00:02:52.068 11:03:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.068 11:03:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:52.068 11:03:44 -- pm/common@44 -- $ pid=2404887 00:02:52.068 11:03:44 -- pm/common@50 -- $ sudo -E kill -TERM 2404887 00:02:52.068 11:03:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:52.068 11:03:44 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:52.068 11:03:44 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:52.068 11:03:44 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:52.068 11:03:44 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:52.068 11:03:44 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:52.068 11:03:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:52.068 11:03:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:52.068 11:03:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:52.068 11:03:44 -- scripts/common.sh@336 -- # IFS=.-: 00:02:52.068 11:03:44 -- scripts/common.sh@336 -- # read -ra ver1 00:02:52.068 11:03:44 -- scripts/common.sh@337 -- # IFS=.-: 00:02:52.069 11:03:44 -- scripts/common.sh@337 -- # read -ra ver2 00:02:52.069 11:03:44 -- scripts/common.sh@338 -- # local 'op=<' 00:02:52.069 11:03:44 -- scripts/common.sh@340 -- # ver1_l=2 00:02:52.069 11:03:44 -- scripts/common.sh@341 -- # ver2_l=1 00:02:52.069 11:03:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:52.069 11:03:44 -- scripts/common.sh@344 -- # case "$op" in 00:02:52.069 11:03:44 -- scripts/common.sh@345 -- # : 1 00:02:52.069 11:03:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:52.069 11:03:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:52.069 11:03:44 -- scripts/common.sh@365 -- # decimal 1 00:02:52.069 11:03:44 -- scripts/common.sh@353 -- # local d=1 00:02:52.069 11:03:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:52.069 11:03:44 -- scripts/common.sh@355 -- # echo 1 00:02:52.069 11:03:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:52.069 11:03:44 -- scripts/common.sh@366 -- # decimal 2 00:02:52.069 11:03:44 -- scripts/common.sh@353 -- # local d=2 00:02:52.069 11:03:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:52.069 11:03:44 -- scripts/common.sh@355 -- # echo 2 00:02:52.069 11:03:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:52.069 11:03:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:52.069 11:03:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:52.069 11:03:44 -- scripts/common.sh@368 -- # return 0 00:02:52.069 11:03:44 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:52.069 11:03:44 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.069 --rc genhtml_branch_coverage=1 00:02:52.069 --rc genhtml_function_coverage=1 00:02:52.069 --rc genhtml_legend=1 00:02:52.069 --rc geninfo_all_blocks=1 00:02:52.069 --rc geninfo_unexecuted_blocks=1 00:02:52.069 00:02:52.069 ' 00:02:52.069 11:03:44 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.069 --rc genhtml_branch_coverage=1 00:02:52.069 --rc genhtml_function_coverage=1 00:02:52.069 --rc genhtml_legend=1 00:02:52.069 --rc geninfo_all_blocks=1 00:02:52.069 --rc geninfo_unexecuted_blocks=1 00:02:52.069 00:02:52.069 ' 00:02:52.069 11:03:44 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.069 --rc genhtml_branch_coverage=1 00:02:52.069 --rc genhtml_function_coverage=1 00:02:52.069 --rc genhtml_legend=1 00:02:52.069 --rc geninfo_all_blocks=1 00:02:52.069 --rc geninfo_unexecuted_blocks=1 00:02:52.069 00:02:52.069 ' 00:02:52.069 11:03:44 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:52.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.069 --rc genhtml_branch_coverage=1 00:02:52.069 --rc genhtml_function_coverage=1 00:02:52.069 --rc genhtml_legend=1 00:02:52.069 --rc geninfo_all_blocks=1 00:02:52.069 --rc geninfo_unexecuted_blocks=1 00:02:52.069 00:02:52.069 ' 00:02:52.069 11:03:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:52.069 11:03:44 -- nvmf/common.sh@7 -- # uname -s 00:02:52.069 11:03:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.069 11:03:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.069 11:03:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.069 11:03:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.069 11:03:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.069 11:03:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.069 11:03:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.069 11:03:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.069 11:03:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.069 11:03:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.331 11:03:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:52.331 11:03:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:52.331 11:03:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.331 11:03:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.331 11:03:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:52.331 11:03:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.331 11:03:44 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:52.331 11:03:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:52.331 11:03:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.331 11:03:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.331 11:03:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.331 11:03:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.331 11:03:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.331 11:03:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.331 11:03:44 -- paths/export.sh@5 -- # export PATH 00:02:52.331 11:03:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.331 11:03:44 -- nvmf/common.sh@51 -- # : 0 00:02:52.331 11:03:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:52.331 11:03:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:52.331 11:03:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.331 11:03:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.331 11:03:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.331 11:03:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:52.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:52.331 11:03:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:52.331 11:03:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:52.331 11:03:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:52.331 11:03:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.331 11:03:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.331 11:03:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.331 11:03:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.331 11:03:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
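The scripts/common.sh walk earlier in this trace (lt 1.15 2 -> cmp_versions -> decimal) is deciding whether the installed lcov predates 2.0, which gates the extra --rc lcov_branch_coverage/lcov_function_coverage options. A minimal sketch of that dot-separated comparison idiom, assuming purely numeric components; version_lt is an illustrative name, not the repo's helper:

# Sketch only: returns 0 (true) when dot-separated version $1 sorts before $2.
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov older than 2: enable the extra --rc coverage flags'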
00:02:52.331 11:03:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.331 11:03:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:52.331 11:03:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:52.331 11:03:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.331 11:03:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.331 11:03:44 -- spdk/autotest.sh@48 -- # udevadm_pid=2471026 00:02:52.331 11:03:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.331 11:03:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.331 11:03:44 -- pm/common@17 -- # local monitor 00:02:52.331 11:03:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.331 11:03:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.331 11:03:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.331 11:03:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.331 11:03:44 -- pm/common@21 -- # date +%s 00:02:52.331 11:03:44 -- pm/common@25 -- # sleep 1 00:02:52.331 11:03:44 -- pm/common@21 -- # date +%s 00:02:52.331 11:03:44 -- pm/common@21 -- # date +%s 00:02:52.331 11:03:44 -- pm/common@21 -- # date +%s 00:02:52.331 11:03:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732097024 00:02:52.331 11:03:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732097024 00:02:52.331 11:03:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732097024 00:02:52.331 11:03:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732097024 00:02:52.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732097024_collect-cpu-load.pm.log 00:02:52.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732097024_collect-vmstat.pm.log 00:02:52.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732097024_collect-cpu-temp.pm.log 00:02:52.331 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732097024_collect-bmc-pm.bmc.pm.log 00:02:53.275 11:03:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.275 11:03:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.275 11:03:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:53.275 11:03:45 -- common/autotest_common.sh@10 -- # set +x 00:02:53.275 11:03:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.275 11:03:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:53.275 11:03:45 -- common/autotest_common.sh@10 -- # set +x 00:02:53.275 11:03:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:53.275 11:03:45 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.275 11:03:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.275 11:03:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:53.275 11:03:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:53.275 11:03:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.275 11:03:45 -- common/autotest_common.sh@1457 -- # uname 00:02:53.275 11:03:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:53.275 11:03:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.275 11:03:45 -- common/autotest_common.sh@1477 -- # uname 00:02:53.275 11:03:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:53.275 11:03:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:53.275 11:03:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:53.536 lcov: LCOV version 1.15 00:02:53.536 11:03:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:08.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:08.443 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:23.356 11:04:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:23.356 11:04:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.356 11:04:16 -- common/autotest_common.sh@10 -- # set +x 00:03:23.356 11:04:16 -- spdk/autotest.sh@78 -- # rm -f 00:03:23.356 11:04:16 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.565 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:27.565 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:27.565 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:27.565 11:04:20 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:27.565 11:04:20 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:27.565 11:04:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:27.565 11:04:20 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:27.565 11:04:20 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:27.565 11:04:20 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:27.565 11:04:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:27.565 11:04:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.565 11:04:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:27.565 11:04:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:27.565 11:04:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:27.565 11:04:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:27.565 11:04:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:27.565 11:04:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:27.565 11:04:20 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:27.565 No valid GPT data, bailing 00:03:27.565 11:04:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.565 11:04:20 -- scripts/common.sh@394 -- # pt= 00:03:27.565 11:04:20 -- scripts/common.sh@395 -- # return 1 00:03:27.565 11:04:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:27.565 1+0 records in 00:03:27.565 1+0 records out 00:03:27.565 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004692 s, 223 MB/s 00:03:27.565 11:04:20 -- spdk/autotest.sh@105 -- # sync 00:03:27.565 11:04:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:27.565 11:04:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:27.565 11:04:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:37.572 11:04:28 -- spdk/autotest.sh@111 -- # uname -s 00:03:37.572 11:04:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:37.572 11:04:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:37.572 11:04:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:39.489 Hugepages 00:03:39.489 node hugesize free / total 00:03:39.489 node0 1048576kB 0 / 0 00:03:39.750 node0 2048kB 0 / 0 00:03:39.750 node1 1048576kB 0 / 0 00:03:39.750 node1 2048kB 0 / 0 00:03:39.750 00:03:39.750 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.750 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:39.750 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:39.750 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:39.750 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:39.750 I/OAT 0000:80:01.7 8086 0b00 
1 ioatdma - - 00:03:39.750 11:04:32 -- spdk/autotest.sh@117 -- # uname -s 00:03:39.750 11:04:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:39.750 11:04:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:39.750 11:04:32 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.962 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.962 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:45.349 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:45.611 11:04:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:46.555 11:04:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:46.555 11:04:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:46.555 11:04:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:46.555 11:04:39 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:46.555 11:04:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:46.555 11:04:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:46.555 11:04:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:46.555 11:04:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:46.555 11:04:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:46.816 11:04:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:46.816 11:04:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:46.816 11:04:39 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.121 Waiting for block devices as requested 00:03:50.121 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:50.121 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:50.382 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:50.382 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:50.382 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:50.643 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:50.643 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:50.643 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:50.905 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:50.905 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:51.165 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:51.165 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:51.165 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:51.426 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:51.426 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:51.426 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:51.686 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:03:51.946 11:04:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.946 11:04:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:51.946 11:04:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:51.946 11:04:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:51.946 11:04:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:51.946 11:04:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.947 11:04:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.947 11:04:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:51.947 11:04:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.947 11:04:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.947 11:04:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.947 11:04:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.947 11:04:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.947 11:04:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.947 11:04:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.947 11:04:44 -- common/autotest_common.sh@1543 -- # continue 00:03:51.947 11:04:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:51.947 11:04:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.947 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.947 11:04:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:51.947 11:04:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.947 11:04:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.947 11:04:44 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.148 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:56.148 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:56.148 11:04:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
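The oacs handling just above is a bitmask test: nvme id-ctrl prints the controller's Optional Admin Command Support field (0x5f on this drive), and bit 3 (0x8) advertises Namespace Management, which the pre-cleanup path requires before it goes on to inspect unvmcap. A condensed sketch of the same check, using this box's /dev/nvme0:

# Sketch: extract OACS and test the Namespace Management bit (0x8).
# 'nvme id-ctrl' emits a line of the form 'oacs : 0x5f'.
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
if (( oacs & 0x8 )); then
    echo 'namespace management supported; cleanup may proceed'
fi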
00:03:56.148 11:04:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.148 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:03:56.148 11:04:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:56.148 11:04:48 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:56.148 11:04:48 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:56.148 11:04:48 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:56.148 11:04:48 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:56.148 11:04:48 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:56.148 11:04:48 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:56.148 11:04:48 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:56.148 11:04:48 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:56.148 11:04:48 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:56.148 11:04:48 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:56.148 11:04:48 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:56.148 11:04:48 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:56.148 11:04:48 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:56.148 11:04:48 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:56.148 11:04:48 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:56.148 11:04:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:56.148 11:04:48 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:56.148 11:04:48 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:56.148 11:04:48 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:56.148 11:04:48 -- common/autotest_common.sh@1572 -- # return 0 00:03:56.148 11:04:48 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:56.148 11:04:48 -- common/autotest_common.sh@1580 -- # return 0 00:03:56.148 11:04:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:56.148 11:04:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:56.148 11:04:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.148 11:04:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:56.148 11:04:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:56.148 11:04:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.148 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:03:56.148 11:04:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:56.148 11:04:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.148 11:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.148 11:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.148 11:04:48 -- common/autotest_common.sh@10 -- # set +x 00:03:56.148 ************************************ 00:03:56.148 START TEST env 00:03:56.148 ************************************ 00:03:56.148 11:04:48 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.409 * Looking for test storage... 
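The opal_revert_cleanup trace above filters the detected NVMe controllers by PCI device ID: 0x0a54 is the ID it would revert, this system's 0xa80a device does not match, so the bdfs array stays empty and the revert is skipped. A sketch of that sysfs filter; the target ID and device attribute path match the trace, while the loop itself is illustrative:

# Sketch: collect NVMe BDFs whose PCI device ID matches a target.
target=0x0a54
for dev in /sys/bus/pci/devices/0000:*; do
    [[ -e $dev/device && -d $dev/nvme ]] || continue   # NVMe functions only
    if [[ $(cat "$dev/device") == "$target" ]]; then
        echo "opal-capable controller at ${dev##*/}"
    fi
done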
00:03:56.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:56.409 11:04:48 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.409 11:04:48 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.409 11:04:48 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.409 11:04:49 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.409 11:04:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.409 11:04:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.409 11:04:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.409 11:04:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.409 11:04:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.409 11:04:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.409 11:04:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.409 11:04:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.409 11:04:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.409 11:04:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.409 11:04:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.409 11:04:49 env -- scripts/common.sh@344 -- # case "$op" in 00:03:56.409 11:04:49 env -- scripts/common.sh@345 -- # : 1 00:03:56.409 11:04:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.409 11:04:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.409 11:04:49 env -- scripts/common.sh@365 -- # decimal 1 00:03:56.409 11:04:49 env -- scripts/common.sh@353 -- # local d=1 00:03:56.409 11:04:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.409 11:04:49 env -- scripts/common.sh@355 -- # echo 1 00:03:56.409 11:04:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.409 11:04:49 env -- scripts/common.sh@366 -- # decimal 2 00:03:56.409 11:04:49 env -- scripts/common.sh@353 -- # local d=2 00:03:56.409 11:04:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.409 11:04:49 env -- scripts/common.sh@355 -- # echo 2 00:03:56.409 11:04:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.409 11:04:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.409 11:04:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.409 11:04:49 env -- scripts/common.sh@368 -- # return 0 00:03:56.409 11:04:49 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.409 11:04:49 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.409 --rc genhtml_branch_coverage=1 00:03:56.409 --rc genhtml_function_coverage=1 00:03:56.409 --rc genhtml_legend=1 00:03:56.409 --rc geninfo_all_blocks=1 00:03:56.409 --rc geninfo_unexecuted_blocks=1 00:03:56.409 00:03:56.409 ' 00:03:56.409 11:04:49 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.409 --rc genhtml_branch_coverage=1 00:03:56.409 --rc genhtml_function_coverage=1 00:03:56.409 --rc genhtml_legend=1 00:03:56.409 --rc geninfo_all_blocks=1 00:03:56.409 --rc geninfo_unexecuted_blocks=1 00:03:56.409 00:03:56.409 ' 00:03:56.409 11:04:49 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.409 --rc genhtml_branch_coverage=1 00:03:56.409 --rc genhtml_function_coverage=1 
00:03:56.409 --rc genhtml_legend=1 00:03:56.409 --rc geninfo_all_blocks=1 00:03:56.409 --rc geninfo_unexecuted_blocks=1 00:03:56.409 00:03:56.409 ' 00:03:56.409 11:04:49 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.409 --rc genhtml_branch_coverage=1 00:03:56.409 --rc genhtml_function_coverage=1 00:03:56.409 --rc genhtml_legend=1 00:03:56.409 --rc geninfo_all_blocks=1 00:03:56.409 --rc geninfo_unexecuted_blocks=1 00:03:56.409 00:03:56.409 ' 00:03:56.409 11:04:49 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.410 11:04:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.410 11:04:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.410 11:04:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.410 ************************************ 00:03:56.410 START TEST env_memory 00:03:56.410 ************************************ 00:03:56.410 11:04:49 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.410 00:03:56.410 00:03:56.410 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.410 http://cunit.sourceforge.net/ 00:03:56.410 00:03:56.410 00:03:56.410 Suite: memory 00:03:56.410 Test: alloc and free memory map ...[2024-11-20 11:04:49.145358] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.670 passed 00:03:56.670 Test: mem map translation ...[2024-11-20 11:04:49.170850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.670 [2024-11-20 11:04:49.170879] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.670 [2024-11-20 11:04:49.170925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.671 [2024-11-20 11:04:49.170933] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.671 passed 00:03:56.671 Test: mem map registration ...[2024-11-20 11:04:49.226060] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:56.671 [2024-11-20 11:04:49.226096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:56.671 passed 00:03:56.671 Test: mem map adjacent registrations ...passed 00:03:56.671 00:03:56.671 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.671 suites 1 1 n/a 0 0 00:03:56.671 tests 4 4 4 0 0 00:03:56.671 asserts 152 152 152 0 n/a 00:03:56.671 00:03:56.671 Elapsed time = 0.193 seconds 00:03:56.671 00:03:56.671 real 0m0.208s 00:03:56.671 user 0m0.192s 00:03:56.671 sys 0m0.015s 00:03:56.671 11:04:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.671 11:04:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:03:56.671 ************************************ 00:03:56.671 END TEST env_memory 00:03:56.671 ************************************ 00:03:56.671 11:04:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.671 11:04:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.671 11:04:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.671 11:04:49 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.671 ************************************ 00:03:56.671 START TEST env_vtophys 00:03:56.671 ************************************ 00:03:56.671 11:04:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.671 EAL: lib.eal log level changed from notice to debug 00:03:56.671 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.671 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.671 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.671 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.671 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.671 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.671 EAL: Detected lcore 6 as core 6 on socket 0 00:03:56.671 EAL: Detected lcore 7 as core 7 on socket 0 00:03:56.671 EAL: Detected lcore 8 as core 8 on socket 0 00:03:56.671 EAL: Detected lcore 9 as core 9 on socket 0 00:03:56.671 EAL: Detected lcore 10 as core 10 on socket 0 00:03:56.671 EAL: Detected lcore 11 as core 11 on socket 0 00:03:56.671 EAL: Detected lcore 12 as core 12 on socket 0 00:03:56.671 EAL: Detected lcore 13 as core 13 on socket 0 00:03:56.671 EAL: Detected lcore 14 as core 14 on socket 0 00:03:56.671 EAL: Detected lcore 15 as core 15 on socket 0 00:03:56.671 EAL: Detected lcore 16 as core 16 on socket 0 00:03:56.671 EAL: Detected lcore 17 as core 17 on socket 0 00:03:56.671 EAL: Detected lcore 18 as core 18 on socket 0 00:03:56.671 EAL: Detected lcore 19 as core 19 on socket 0 00:03:56.671 EAL: Detected lcore 20 as core 20 on socket 0 00:03:56.671 EAL: Detected lcore 21 as core 21 on socket 0 00:03:56.671 EAL: Detected lcore 22 as core 22 on socket 0 00:03:56.671 EAL: Detected lcore 23 as core 23 on socket 0 00:03:56.671 EAL: Detected lcore 24 as core 24 on socket 0 00:03:56.671 EAL: Detected lcore 25 as core 25 on socket 0 00:03:56.671 EAL: Detected lcore 26 as core 26 on socket 0 00:03:56.671 EAL: Detected lcore 27 as core 27 on socket 0 00:03:56.671 EAL: Detected lcore 28 as core 28 on socket 0 00:03:56.671 EAL: Detected lcore 29 as core 29 on socket 0 00:03:56.671 EAL: Detected lcore 30 as core 30 on socket 0 00:03:56.671 EAL: Detected lcore 31 as core 31 on socket 0 00:03:56.671 EAL: Detected lcore 32 as core 32 on socket 0 00:03:56.671 EAL: Detected lcore 33 as core 33 on socket 0 00:03:56.671 EAL: Detected lcore 34 as core 34 on socket 0 00:03:56.671 EAL: Detected lcore 35 as core 35 on socket 0 00:03:56.671 EAL: Detected lcore 36 as core 0 on socket 1 00:03:56.671 EAL: Detected lcore 37 as core 1 on socket 1 00:03:56.671 EAL: Detected lcore 38 as core 2 on socket 1 00:03:56.671 EAL: Detected lcore 39 as core 3 on socket 1 00:03:56.671 EAL: Detected lcore 40 as core 4 on socket 1 00:03:56.671 EAL: Detected lcore 41 as core 5 on socket 1 00:03:56.671 EAL: Detected lcore 42 as core 6 on socket 1 00:03:56.671 EAL: Detected lcore 43 as core 7 on socket 1 00:03:56.671 EAL: Detected lcore 44 as core 8 on socket 1 00:03:56.671 EAL: Detected lcore 45 as core 9 on socket 1 
00:03:56.671 EAL: Detected lcore 46 as core 10 on socket 1 00:03:56.671 EAL: Detected lcore 47 as core 11 on socket 1 00:03:56.671 EAL: Detected lcore 48 as core 12 on socket 1 00:03:56.671 EAL: Detected lcore 49 as core 13 on socket 1 00:03:56.671 EAL: Detected lcore 50 as core 14 on socket 1 00:03:56.671 EAL: Detected lcore 51 as core 15 on socket 1 00:03:56.671 EAL: Detected lcore 52 as core 16 on socket 1 00:03:56.671 EAL: Detected lcore 53 as core 17 on socket 1 00:03:56.671 EAL: Detected lcore 54 as core 18 on socket 1 00:03:56.671 EAL: Detected lcore 55 as core 19 on socket 1 00:03:56.671 EAL: Detected lcore 56 as core 20 on socket 1 00:03:56.671 EAL: Detected lcore 57 as core 21 on socket 1 00:03:56.671 EAL: Detected lcore 58 as core 22 on socket 1 00:03:56.671 EAL: Detected lcore 59 as core 23 on socket 1 00:03:56.671 EAL: Detected lcore 60 as core 24 on socket 1 00:03:56.671 EAL: Detected lcore 61 as core 25 on socket 1 00:03:56.671 EAL: Detected lcore 62 as core 26 on socket 1 00:03:56.671 EAL: Detected lcore 63 as core 27 on socket 1 00:03:56.671 EAL: Detected lcore 64 as core 28 on socket 1 00:03:56.671 EAL: Detected lcore 65 as core 29 on socket 1 00:03:56.671 EAL: Detected lcore 66 as core 30 on socket 1 00:03:56.671 EAL: Detected lcore 67 as core 31 on socket 1 00:03:56.671 EAL: Detected lcore 68 as core 32 on socket 1 00:03:56.671 EAL: Detected lcore 69 as core 33 on socket 1 00:03:56.671 EAL: Detected lcore 70 as core 34 on socket 1 00:03:56.671 EAL: Detected lcore 71 as core 35 on socket 1 00:03:56.671 EAL: Detected lcore 72 as core 0 on socket 0 00:03:56.671 EAL: Detected lcore 73 as core 1 on socket 0 00:03:56.671 EAL: Detected lcore 74 as core 2 on socket 0 00:03:56.671 EAL: Detected lcore 75 as core 3 on socket 0 00:03:56.671 EAL: Detected lcore 76 as core 4 on socket 0 00:03:56.671 EAL: Detected lcore 77 as core 5 on socket 0 00:03:56.671 EAL: Detected lcore 78 as core 6 on socket 0 00:03:56.671 EAL: Detected lcore 79 as core 7 on socket 0 00:03:56.671 EAL: Detected lcore 80 as core 8 on socket 0 00:03:56.671 EAL: Detected lcore 81 as core 9 on socket 0 00:03:56.671 EAL: Detected lcore 82 as core 10 on socket 0 00:03:56.671 EAL: Detected lcore 83 as core 11 on socket 0 00:03:56.671 EAL: Detected lcore 84 as core 12 on socket 0 00:03:56.671 EAL: Detected lcore 85 as core 13 on socket 0 00:03:56.671 EAL: Detected lcore 86 as core 14 on socket 0 00:03:56.671 EAL: Detected lcore 87 as core 15 on socket 0 00:03:56.671 EAL: Detected lcore 88 as core 16 on socket 0 00:03:56.671 EAL: Detected lcore 89 as core 17 on socket 0 00:03:56.671 EAL: Detected lcore 90 as core 18 on socket 0 00:03:56.671 EAL: Detected lcore 91 as core 19 on socket 0 00:03:56.671 EAL: Detected lcore 92 as core 20 on socket 0 00:03:56.671 EAL: Detected lcore 93 as core 21 on socket 0 00:03:56.671 EAL: Detected lcore 94 as core 22 on socket 0 00:03:56.671 EAL: Detected lcore 95 as core 23 on socket 0 00:03:56.671 EAL: Detected lcore 96 as core 24 on socket 0 00:03:56.671 EAL: Detected lcore 97 as core 25 on socket 0 00:03:56.671 EAL: Detected lcore 98 as core 26 on socket 0 00:03:56.671 EAL: Detected lcore 99 as core 27 on socket 0 00:03:56.671 EAL: Detected lcore 100 as core 28 on socket 0 00:03:56.671 EAL: Detected lcore 101 as core 29 on socket 0 00:03:56.671 EAL: Detected lcore 102 as core 30 on socket 0 00:03:56.672 EAL: Detected lcore 103 as core 31 on socket 0 00:03:56.672 EAL: Detected lcore 104 as core 32 on socket 0 00:03:56.672 EAL: Detected lcore 105 as core 33 on socket 0 00:03:56.672 EAL: 
Detected lcore 106 as core 34 on socket 0 00:03:56.672 EAL: Detected lcore 107 as core 35 on socket 0 00:03:56.672 EAL: Detected lcore 108 as core 0 on socket 1 00:03:56.672 EAL: Detected lcore 109 as core 1 on socket 1 00:03:56.672 EAL: Detected lcore 110 as core 2 on socket 1 00:03:56.672 EAL: Detected lcore 111 as core 3 on socket 1 00:03:56.672 EAL: Detected lcore 112 as core 4 on socket 1 00:03:56.672 EAL: Detected lcore 113 as core 5 on socket 1 00:03:56.672 EAL: Detected lcore 114 as core 6 on socket 1 00:03:56.672 EAL: Detected lcore 115 as core 7 on socket 1 00:03:56.672 EAL: Detected lcore 116 as core 8 on socket 1 00:03:56.672 EAL: Detected lcore 117 as core 9 on socket 1 00:03:56.672 EAL: Detected lcore 118 as core 10 on socket 1 00:03:56.672 EAL: Detected lcore 119 as core 11 on socket 1 00:03:56.672 EAL: Detected lcore 120 as core 12 on socket 1 00:03:56.672 EAL: Detected lcore 121 as core 13 on socket 1 00:03:56.932 EAL: Detected lcore 122 as core 14 on socket 1 00:03:56.932 EAL: Detected lcore 123 as core 15 on socket 1 00:03:56.932 EAL: Detected lcore 124 as core 16 on socket 1 00:03:56.932 EAL: Detected lcore 125 as core 17 on socket 1 00:03:56.932 EAL: Detected lcore 126 as core 18 on socket 1 00:03:56.932 EAL: Detected lcore 127 as core 19 on socket 1 00:03:56.932 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:56.932 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:56.932 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:56.932 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:56.932 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:56.932 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:56.932 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:56.932 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:56.932 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:56.932 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:56.932 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:56.932 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:56.932 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:56.932 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:56.932 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:56.932 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:56.932 EAL: Maximum logical cores by configuration: 128 00:03:56.932 EAL: Detected CPU lcores: 128 00:03:56.932 EAL: Detected NUMA nodes: 2 00:03:56.932 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.932 EAL: Detected shared linkage of DPDK 00:03:56.932 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.932 EAL: Bus pci wants IOVA as 'DC' 00:03:56.932 EAL: Buses did not request a specific IOVA mode. 00:03:56.932 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.932 EAL: Selected IOVA mode 'VA' 00:03:56.932 EAL: Probing VFIO support... 00:03:56.932 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.932 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.932 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.932 EAL: VFIO support initialized 00:03:56.932 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.933 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.933 EAL: Setting up physically contiguous memory... 
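The lcore table EAL printed above is read from Linux CPU topology files, and the "Skipped lcore 128..143" lines follow from the build's 128-lcore ceiling ("Maximum logical cores by configuration: 128"). Reading the same sysfs attributes reproduces the core/socket mapping; a sketch over the standard paths:

# Sketch: print each CPU the way EAL reports it ('lcore N as core C on socket S').
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}
    core=$(cat "$cpu/topology/core_id")
    sock=$(cat "$cpu/topology/physical_package_id")
    echo "lcore $n as core $core on socket $sock"
done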
00:03:56.933 EAL: Setting maximum number of open files to 524288 00:03:56.933 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.933 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.933 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.933 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.933 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.933 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.933 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.933 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.933 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:56.933 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.933 EAL: Hugepages will be freed exactly as allocated. 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: TSC frequency is ~2400000 KHz 00:03:56.933 EAL: Main lcore 0 is ready (tid=7f42fcd02a00;cpuset=[0]) 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 0 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.933 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.933 00:03:56.933 00:03:56.933 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.933 http://cunit.sourceforge.net/ 00:03:56.933 00:03:56.933 00:03:56.933 Suite: components_suite 00:03:56.933 Test: vtophys_malloc_test ...passed 00:03:56.933 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.933 EAL: Trying to obtain current memory policy. 
00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 66MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 66MB 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 130MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was shrunk by 130MB 00:03:56.933 EAL: Trying to obtain current memory policy. 00:03:56.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.933 EAL: Restoring previous memory policy: 4 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.933 EAL: request: mp_malloc_sync 00:03:56.933 EAL: No shared files mode enabled, IPC is disabled 00:03:56.933 EAL: Heap on socket 0 was expanded by 258MB 00:03:56.933 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.195 EAL: request: mp_malloc_sync 00:03:57.195 EAL: No shared files mode enabled, IPC is disabled 00:03:57.195 EAL: Heap on socket 0 was shrunk by 258MB 00:03:57.195 EAL: Trying to obtain current memory policy. 
00:03:57.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.195 EAL: Restoring previous memory policy: 4 00:03:57.195 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.195 EAL: request: mp_malloc_sync 00:03:57.195 EAL: No shared files mode enabled, IPC is disabled 00:03:57.195 EAL: Heap on socket 0 was expanded by 514MB 00:03:57.195 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.195 EAL: request: mp_malloc_sync 00:03:57.195 EAL: No shared files mode enabled, IPC is disabled 00:03:57.195 EAL: Heap on socket 0 was shrunk by 514MB 00:03:57.195 EAL: Trying to obtain current memory policy. 00:03:57.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.460 EAL: Restoring previous memory policy: 4 00:03:57.460 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.460 EAL: request: mp_malloc_sync 00:03:57.460 EAL: No shared files mode enabled, IPC is disabled 00:03:57.460 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.460 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.775 EAL: request: mp_malloc_sync 00:03:57.775 EAL: No shared files mode enabled, IPC is disabled 00:03:57.775 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.775 passed 00:03:57.775 00:03:57.775 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.775 suites 1 1 n/a 0 0 00:03:57.775 tests 2 2 2 0 0 00:03:57.775 asserts 497 497 497 0 n/a 00:03:57.775 00:03:57.775 Elapsed time = 0.689 seconds 00:03:57.775 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.775 EAL: request: mp_malloc_sync 00:03:57.775 EAL: No shared files mode enabled, IPC is disabled 00:03:57.775 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.775 EAL: No shared files mode enabled, IPC is disabled 00:03:57.775 EAL: No shared files mode enabled, IPC is disabled 00:03:57.775 EAL: No shared files mode enabled, IPC is disabled 00:03:57.775 00:03:57.775 real 0m0.837s 00:03:57.775 user 0m0.433s 00:03:57.775 sys 0m0.380s 00:03:57.775 11:04:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.775 11:04:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:57.775 ************************************ 00:03:57.775 END TEST env_vtophys 00:03:57.775 ************************************ 00:03:57.775 11:04:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.775 11:04:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.775 11:04:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.775 11:04:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.775 ************************************ 00:03:57.775 START TEST env_pci 00:03:57.775 ************************************ 00:03:57.775 11:04:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.775 00:03:57.775 00:03:57.775 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.775 http://cunit.sourceforge.net/ 00:03:57.775 00:03:57.775 00:03:57.775 Suite: pci 00:03:57.775 Test: pci_hook ...[2024-11-20 11:04:50.317262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2490430 has claimed it 00:03:57.775 EAL: Cannot find device (10000:00:01.0) 00:03:57.775 EAL: Failed to attach device on primary process 00:03:57.775 passed 00:03:57.775 00:03:57.775 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:57.775 suites 1 1 n/a 0 0 00:03:57.775 tests 1 1 1 0 0 00:03:57.775 asserts 25 25 25 0 n/a 00:03:57.775 00:03:57.775 Elapsed time = 0.031 seconds 00:03:57.775 00:03:57.775 real 0m0.053s 00:03:57.775 user 0m0.020s 00:03:57.775 sys 0m0.033s 00:03:57.775 11:04:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.775 11:04:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:57.775 ************************************ 00:03:57.775 END TEST env_pci 00:03:57.775 ************************************ 00:03:57.775 11:04:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:57.775 11:04:50 env -- env/env.sh@15 -- # uname 00:03:57.775 11:04:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:57.775 11:04:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:57.775 11:04:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.775 11:04:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:57.775 11:04:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.775 11:04:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.775 ************************************ 00:03:57.775 START TEST env_dpdk_post_init 00:03:57.775 ************************************ 00:03:57.775 11:04:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.775 EAL: Detected CPU lcores: 128 00:03:57.775 EAL: Detected NUMA nodes: 2 00:03:57.775 EAL: Detected shared linkage of DPDK 00:03:57.775 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.069 EAL: Selected IOVA mode 'VA' 00:03:58.069 EAL: VFIO support initialized 00:03:58.069 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.069 EAL: Using IOMMU type 1 (Type 1) 00:03:58.069 EAL: Ignore mapping IO port bar(1) 00:03:58.343 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:58.343 EAL: Ignore mapping IO port bar(1) 00:03:58.343 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:58.604 EAL: Ignore mapping IO port bar(1) 00:03:58.604 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:58.865 EAL: Ignore mapping IO port bar(1) 00:03:58.865 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:58.865 EAL: Ignore mapping IO port bar(1) 00:03:59.126 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:59.126 EAL: Ignore mapping IO port bar(1) 00:03:59.387 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:59.387 EAL: Ignore mapping IO port bar(1) 00:03:59.648 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:59.648 EAL: Ignore mapping IO port bar(1) 00:03:59.648 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:59.909 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:00.170 EAL: Ignore mapping IO port bar(1) 00:04:00.170 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:00.430 EAL: Ignore mapping IO port bar(1) 00:04:00.430 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:00.430 EAL: Ignore mapping IO port bar(1) 00:04:00.690 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:00.690 EAL: Ignore mapping IO port bar(1) 00:04:00.957 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:00.958 EAL: Ignore mapping IO port bar(1) 00:04:01.224 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:01.224 EAL: Ignore mapping IO port bar(1) 00:04:01.224 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:01.484 EAL: Ignore mapping IO port bar(1) 00:04:01.484 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:01.744 EAL: Ignore mapping IO port bar(1) 00:04:01.744 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:01.744 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:01.744 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:02.006 Starting DPDK initialization... 00:04:02.006 Starting SPDK post initialization... 00:04:02.006 SPDK NVMe probe 00:04:02.006 Attaching to 0000:65:00.0 00:04:02.006 Attached to 0000:65:00.0 00:04:02.006 Cleaning up... 00:04:03.921 00:04:03.921 real 0m5.745s 00:04:03.921 user 0m0.120s 00:04:03.921 sys 0m0.183s 00:04:03.921 11:04:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.921 11:04:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:03.921 ************************************ 00:04:03.921 END TEST env_dpdk_post_init 00:04:03.921 ************************************ 00:04:03.921 11:04:56 env -- env/env.sh@26 -- # uname 00:04:03.922 11:04:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:03.922 11:04:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:03.922 11:04:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.922 11:04:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.922 11:04:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.922 ************************************ 00:04:03.922 START TEST env_mem_callbacks 00:04:03.922 ************************************ 00:04:03.922 11:04:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:03.922 EAL: Detected CPU lcores: 128 00:04:03.922 EAL: Detected NUMA nodes: 2 00:04:03.922 EAL: Detected shared linkage of DPDK 00:04:03.922 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:03.922 EAL: Selected IOVA mode 'VA' 00:04:03.922 EAL: VFIO support initialized 00:04:03.922 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:03.922 00:04:03.922 00:04:03.922 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.922 http://cunit.sourceforge.net/ 00:04:03.922 00:04:03.922 00:04:03.922 Suite: memory 00:04:03.922 Test: test ... 
00:04:03.922 register 0x200000200000 2097152 00:04:03.922 malloc 3145728 00:04:03.922 register 0x200000400000 4194304 00:04:03.922 buf 0x200000500000 len 3145728 PASSED 00:04:03.922 malloc 64 00:04:03.922 buf 0x2000004fff40 len 64 PASSED 00:04:03.922 malloc 4194304 00:04:03.922 register 0x200000800000 6291456 00:04:03.922 buf 0x200000a00000 len 4194304 PASSED 00:04:03.922 free 0x200000500000 3145728 00:04:03.922 free 0x2000004fff40 64 00:04:03.922 unregister 0x200000400000 4194304 PASSED 00:04:03.922 free 0x200000a00000 4194304 00:04:03.922 unregister 0x200000800000 6291456 PASSED 00:04:03.922 malloc 8388608 00:04:03.922 register 0x200000400000 10485760 00:04:03.922 buf 0x200000600000 len 8388608 PASSED 00:04:03.922 free 0x200000600000 8388608 00:04:03.922 unregister 0x200000400000 10485760 PASSED 00:04:03.922 passed 00:04:03.922 00:04:03.922 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.922 suites 1 1 n/a 0 0 00:04:03.922 tests 1 1 1 0 0 00:04:03.922 asserts 15 15 15 0 n/a 00:04:03.922 00:04:03.922 Elapsed time = 0.010 seconds 00:04:03.922 00:04:03.922 real 0m0.070s 00:04:03.922 user 0m0.018s 00:04:03.922 sys 0m0.052s 00:04:03.922 11:04:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.922 11:04:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:03.922 ************************************ 00:04:03.922 END TEST env_mem_callbacks 00:04:03.922 ************************************ 00:04:03.922 00:04:03.922 real 0m7.536s 00:04:03.922 user 0m1.057s 00:04:03.922 sys 0m1.048s 00:04:03.922 11:04:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.922 11:04:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.922 ************************************ 00:04:03.922 END TEST env 00:04:03.922 ************************************ 00:04:03.922 11:04:56 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.922 11:04:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.922 11:04:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.922 11:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:03.922 ************************************ 00:04:03.922 START TEST rpc 00:04:03.922 ************************************ 00:04:03.922 11:04:56 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:03.922 * Looking for test storage... 
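The register/unregister lines in the env_mem_callbacks output above trace SPDK's public memory-registration API: each 2 MiB-aligned region is announced to (and later removed from) SPDK's memory maps, which is what drives the registered callbacks. A minimal sketch, assuming the API in include/spdk/env.h; the buffer here is an illustrative stand-in for the test's allocations:

#include "spdk/env.h"
#include <assert.h>
#include <stdlib.h>

static void
exercise_mem_registration(void)
{
	size_t len = 2 * 1024 * 1024;         /* 2 MiB: SPDK's registration granularity */
	void *buf = aligned_alloc(len, len);  /* 2 MiB-aligned placeholder buffer */

	assert(buf != NULL);
	assert(spdk_mem_register(buf, len) == 0);   /* produces "register ..." lines */
	assert(spdk_mem_unregister(buf, len) == 0); /* produces "unregister ..." lines */
	free(buf);
}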
00:04:03.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:03.922 11:04:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.922 11:04:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.922 11:04:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.922 11:04:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.922 11:04:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.922 11:04:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.922 11:04:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.922 11:04:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.922 11:04:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.922 11:04:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.922 11:04:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.922 11:04:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.922 11:04:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.922 11:04:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.922 11:04:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.922 11:04:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:03.922 11:04:56 rpc -- scripts/common.sh@345 -- # : 1 00:04:03.922 11:04:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.922 11:04:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:04.184 11:04:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:04.184 11:04:56 rpc -- scripts/common.sh@353 -- # local d=1 00:04:04.184 11:04:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.184 11:04:56 rpc -- scripts/common.sh@355 -- # echo 1 00:04:04.184 11:04:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.184 11:04:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:04.184 11:04:56 rpc -- scripts/common.sh@353 -- # local d=2 00:04:04.184 11:04:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.184 11:04:56 rpc -- scripts/common.sh@355 -- # echo 2 00:04:04.184 11:04:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.184 11:04:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.184 11:04:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.184 11:04:56 rpc -- scripts/common.sh@368 -- # return 0 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:04.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.184 --rc genhtml_branch_coverage=1 00:04:04.184 --rc genhtml_function_coverage=1 00:04:04.184 --rc genhtml_legend=1 00:04:04.184 --rc geninfo_all_blocks=1 00:04:04.184 --rc geninfo_unexecuted_blocks=1 00:04:04.184 00:04:04.184 ' 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:04.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.184 --rc genhtml_branch_coverage=1 00:04:04.184 --rc genhtml_function_coverage=1 00:04:04.184 --rc genhtml_legend=1 00:04:04.184 --rc geninfo_all_blocks=1 00:04:04.184 --rc geninfo_unexecuted_blocks=1 00:04:04.184 00:04:04.184 ' 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:04.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.184 --rc genhtml_branch_coverage=1 00:04:04.184 --rc genhtml_function_coverage=1 
00:04:04.184 --rc genhtml_legend=1 00:04:04.184 --rc geninfo_all_blocks=1 00:04:04.184 --rc geninfo_unexecuted_blocks=1 00:04:04.184 00:04:04.184 ' 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:04.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.184 --rc genhtml_branch_coverage=1 00:04:04.184 --rc genhtml_function_coverage=1 00:04:04.184 --rc genhtml_legend=1 00:04:04.184 --rc geninfo_all_blocks=1 00:04:04.184 --rc geninfo_unexecuted_blocks=1 00:04:04.184 00:04:04.184 ' 00:04:04.184 11:04:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2491763 00:04:04.184 11:04:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.184 11:04:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:04.184 11:04:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2491763 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 2491763 ']' 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.184 11:04:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.184 [2024-11-20 11:04:56.745087] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:04.184 [2024-11-20 11:04:56.745157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491763 ] 00:04:04.184 [2024-11-20 11:04:56.837531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.184 [2024-11-20 11:04:56.889811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:04.184 [2024-11-20 11:04:56.889871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2491763' to capture a snapshot of events at runtime. 00:04:04.184 [2024-11-20 11:04:56.889882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:04.184 [2024-11-20 11:04:56.889890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:04.184 [2024-11-20 11:04:56.889896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2491763 for offline analysis/debug. 
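The app_setup_trace notices above are printed because spdk_tgt was launched with '-e bdev' (rpc.sh line 64), which enables the bdev tracepoint group and backs it with the /dev/shm/spdk_tgt_trace.pid2491763 shared-memory file that rpc_trace_cmd_test inspects further down. A minimal sketch of the embedded-application equivalent, assuming SPDK's app framework in include/spdk/event.h; the names here are illustrative:

#include "spdk/event.h"

static void
app_started(void *ctx)
{
	/* Application is up; tracepoints in the bdev group are now recorded
	 * into the per-PID shared-memory trace file. */
}

int
run_app(void)
{
	struct spdk_app_opts opts = {};

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "example_tgt";
	opts.tpoint_group_mask = "bdev"; /* same effect as launching with -e bdev */
	return spdk_app_start(&opts, app_started, NULL);
}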
00:04:04.184 [2024-11-20 11:04:56.890716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.128 11:04:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:05.128 11:04:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:05.128 11:04:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.128 11:04:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:05.128 11:04:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:05.128 11:04:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:05.128 11:04:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.128 11:04:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.128 11:04:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.128 ************************************ 00:04:05.128 START TEST rpc_integrity 00:04:05.128 ************************************ 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.128 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.128 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:05.128 { 00:04:05.128 "name": "Malloc0", 00:04:05.128 "aliases": [ 00:04:05.128 "fff096ab-c309-4b64-b6b9-6d4d38f300be" 00:04:05.128 ], 00:04:05.128 "product_name": "Malloc disk", 00:04:05.128 "block_size": 512, 00:04:05.128 "num_blocks": 16384, 00:04:05.128 "uuid": "fff096ab-c309-4b64-b6b9-6d4d38f300be", 00:04:05.128 "assigned_rate_limits": { 00:04:05.128 "rw_ios_per_sec": 0, 00:04:05.128 "rw_mbytes_per_sec": 0, 00:04:05.128 "r_mbytes_per_sec": 0, 00:04:05.128 "w_mbytes_per_sec": 0 00:04:05.128 }, 
00:04:05.128 "claimed": false, 00:04:05.128 "zoned": false, 00:04:05.128 "supported_io_types": { 00:04:05.128 "read": true, 00:04:05.128 "write": true, 00:04:05.128 "unmap": true, 00:04:05.128 "flush": true, 00:04:05.128 "reset": true, 00:04:05.128 "nvme_admin": false, 00:04:05.128 "nvme_io": false, 00:04:05.128 "nvme_io_md": false, 00:04:05.128 "write_zeroes": true, 00:04:05.128 "zcopy": true, 00:04:05.128 "get_zone_info": false, 00:04:05.128 "zone_management": false, 00:04:05.128 "zone_append": false, 00:04:05.128 "compare": false, 00:04:05.128 "compare_and_write": false, 00:04:05.128 "abort": true, 00:04:05.128 "seek_hole": false, 00:04:05.128 "seek_data": false, 00:04:05.128 "copy": true, 00:04:05.128 "nvme_iov_md": false 00:04:05.128 }, 00:04:05.128 "memory_domains": [ 00:04:05.128 { 00:04:05.128 "dma_device_id": "system", 00:04:05.129 "dma_device_type": 1 00:04:05.129 }, 00:04:05.129 { 00:04:05.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.129 "dma_device_type": 2 00:04:05.129 } 00:04:05.129 ], 00:04:05.129 "driver_specific": {} 00:04:05.129 } 00:04:05.129 ]' 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.129 [2024-11-20 11:04:57.748571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:05.129 [2024-11-20 11:04:57.748616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:05.129 [2024-11-20 11:04:57.748633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23c6db0 00:04:05.129 [2024-11-20 11:04:57.748641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.129 [2024-11-20 11:04:57.750240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.129 [2024-11-20 11:04:57.750278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:05.129 Passthru0 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:05.129 { 00:04:05.129 "name": "Malloc0", 00:04:05.129 "aliases": [ 00:04:05.129 "fff096ab-c309-4b64-b6b9-6d4d38f300be" 00:04:05.129 ], 00:04:05.129 "product_name": "Malloc disk", 00:04:05.129 "block_size": 512, 00:04:05.129 "num_blocks": 16384, 00:04:05.129 "uuid": "fff096ab-c309-4b64-b6b9-6d4d38f300be", 00:04:05.129 "assigned_rate_limits": { 00:04:05.129 "rw_ios_per_sec": 0, 00:04:05.129 "rw_mbytes_per_sec": 0, 00:04:05.129 "r_mbytes_per_sec": 0, 00:04:05.129 "w_mbytes_per_sec": 0 00:04:05.129 }, 00:04:05.129 "claimed": true, 00:04:05.129 "claim_type": "exclusive_write", 00:04:05.129 "zoned": false, 00:04:05.129 "supported_io_types": { 00:04:05.129 "read": true, 00:04:05.129 "write": true, 00:04:05.129 "unmap": true, 00:04:05.129 "flush": 
true, 00:04:05.129 "reset": true, 00:04:05.129 "nvme_admin": false, 00:04:05.129 "nvme_io": false, 00:04:05.129 "nvme_io_md": false, 00:04:05.129 "write_zeroes": true, 00:04:05.129 "zcopy": true, 00:04:05.129 "get_zone_info": false, 00:04:05.129 "zone_management": false, 00:04:05.129 "zone_append": false, 00:04:05.129 "compare": false, 00:04:05.129 "compare_and_write": false, 00:04:05.129 "abort": true, 00:04:05.129 "seek_hole": false, 00:04:05.129 "seek_data": false, 00:04:05.129 "copy": true, 00:04:05.129 "nvme_iov_md": false 00:04:05.129 }, 00:04:05.129 "memory_domains": [ 00:04:05.129 { 00:04:05.129 "dma_device_id": "system", 00:04:05.129 "dma_device_type": 1 00:04:05.129 }, 00:04:05.129 { 00:04:05.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.129 "dma_device_type": 2 00:04:05.129 } 00:04:05.129 ], 00:04:05.129 "driver_specific": {} 00:04:05.129 }, 00:04:05.129 { 00:04:05.129 "name": "Passthru0", 00:04:05.129 "aliases": [ 00:04:05.129 "d1e4a672-cae4-55d6-9b45-22c044b13b0c" 00:04:05.129 ], 00:04:05.129 "product_name": "passthru", 00:04:05.129 "block_size": 512, 00:04:05.129 "num_blocks": 16384, 00:04:05.129 "uuid": "d1e4a672-cae4-55d6-9b45-22c044b13b0c", 00:04:05.129 "assigned_rate_limits": { 00:04:05.129 "rw_ios_per_sec": 0, 00:04:05.129 "rw_mbytes_per_sec": 0, 00:04:05.129 "r_mbytes_per_sec": 0, 00:04:05.129 "w_mbytes_per_sec": 0 00:04:05.129 }, 00:04:05.129 "claimed": false, 00:04:05.129 "zoned": false, 00:04:05.129 "supported_io_types": { 00:04:05.129 "read": true, 00:04:05.129 "write": true, 00:04:05.129 "unmap": true, 00:04:05.129 "flush": true, 00:04:05.129 "reset": true, 00:04:05.129 "nvme_admin": false, 00:04:05.129 "nvme_io": false, 00:04:05.129 "nvme_io_md": false, 00:04:05.129 "write_zeroes": true, 00:04:05.129 "zcopy": true, 00:04:05.129 "get_zone_info": false, 00:04:05.129 "zone_management": false, 00:04:05.129 "zone_append": false, 00:04:05.129 "compare": false, 00:04:05.129 "compare_and_write": false, 00:04:05.129 "abort": true, 00:04:05.129 "seek_hole": false, 00:04:05.129 "seek_data": false, 00:04:05.129 "copy": true, 00:04:05.129 "nvme_iov_md": false 00:04:05.129 }, 00:04:05.129 "memory_domains": [ 00:04:05.129 { 00:04:05.129 "dma_device_id": "system", 00:04:05.129 "dma_device_type": 1 00:04:05.129 }, 00:04:05.129 { 00:04:05.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.129 "dma_device_type": 2 00:04:05.129 } 00:04:05.129 ], 00:04:05.129 "driver_specific": { 00:04:05.129 "passthru": { 00:04:05.129 "name": "Passthru0", 00:04:05.129 "base_bdev_name": "Malloc0" 00:04:05.129 } 00:04:05.129 } 00:04:05.129 } 00:04:05.129 ]' 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.129 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:05.129 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:05.390 11:04:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:05.390 00:04:05.390 real 0m0.302s 00:04:05.390 user 0m0.186s 00:04:05.390 sys 0m0.047s 00:04:05.390 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.390 11:04:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.390 ************************************ 00:04:05.390 END TEST rpc_integrity 00:04:05.390 ************************************ 00:04:05.391 11:04:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:05.391 11:04:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.391 11:04:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.391 11:04:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 ************************************ 00:04:05.391 START TEST rpc_plugins 00:04:05.391 ************************************ 00:04:05.391 11:04:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:05.391 11:04:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:05.391 11:04:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.391 11:04:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:05.391 { 00:04:05.391 "name": "Malloc1", 00:04:05.391 "aliases": [ 00:04:05.391 "e8517403-09c3-4901-85df-345f70d850f1" 00:04:05.391 ], 00:04:05.391 "product_name": "Malloc disk", 00:04:05.391 "block_size": 4096, 00:04:05.391 "num_blocks": 256, 00:04:05.391 "uuid": "e8517403-09c3-4901-85df-345f70d850f1", 00:04:05.391 "assigned_rate_limits": { 00:04:05.391 "rw_ios_per_sec": 0, 00:04:05.391 "rw_mbytes_per_sec": 0, 00:04:05.391 "r_mbytes_per_sec": 0, 00:04:05.391 "w_mbytes_per_sec": 0 00:04:05.391 }, 00:04:05.391 "claimed": false, 00:04:05.391 "zoned": false, 00:04:05.391 "supported_io_types": { 00:04:05.391 "read": true, 00:04:05.391 "write": true, 00:04:05.391 "unmap": true, 00:04:05.391 "flush": true, 00:04:05.391 "reset": true, 00:04:05.391 "nvme_admin": false, 00:04:05.391 "nvme_io": false, 00:04:05.391 "nvme_io_md": false, 00:04:05.391 "write_zeroes": true, 00:04:05.391 "zcopy": true, 00:04:05.391 "get_zone_info": false, 00:04:05.391 "zone_management": false, 00:04:05.391 "zone_append": false, 00:04:05.391 "compare": false, 00:04:05.391 "compare_and_write": false, 00:04:05.391 "abort": true, 00:04:05.391 "seek_hole": false, 00:04:05.391 "seek_data": false, 00:04:05.391 "copy": true, 00:04:05.391 "nvme_iov_md": false 
00:04:05.391 }, 00:04:05.391 "memory_domains": [ 00:04:05.391 { 00:04:05.391 "dma_device_id": "system", 00:04:05.391 "dma_device_type": 1 00:04:05.391 }, 00:04:05.391 { 00:04:05.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:05.391 "dma_device_type": 2 00:04:05.391 } 00:04:05.391 ], 00:04:05.391 "driver_specific": {} 00:04:05.391 } 00:04:05.391 ]' 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:05.391 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:05.652 11:04:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:05.652 00:04:05.652 real 0m0.161s 00:04:05.652 user 0m0.093s 00:04:05.652 sys 0m0.027s 00:04:05.652 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.652 11:04:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:05.652 ************************************ 00:04:05.652 END TEST rpc_plugins 00:04:05.652 ************************************ 00:04:05.652 11:04:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:05.652 11:04:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.652 11:04:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.652 11:04:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.652 ************************************ 00:04:05.652 START TEST rpc_trace_cmd_test 00:04:05.652 ************************************ 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:05.652 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2491763", 00:04:05.652 "tpoint_group_mask": "0x8", 00:04:05.652 "iscsi_conn": { 00:04:05.652 "mask": "0x2", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "scsi": { 00:04:05.652 "mask": "0x4", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "bdev": { 00:04:05.652 "mask": "0x8", 00:04:05.652 "tpoint_mask": "0xffffffffffffffff" 00:04:05.652 }, 00:04:05.652 "nvmf_rdma": { 00:04:05.652 "mask": "0x10", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "nvmf_tcp": { 00:04:05.652 "mask": "0x20", 00:04:05.652 
"tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "ftl": { 00:04:05.652 "mask": "0x40", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "blobfs": { 00:04:05.652 "mask": "0x80", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "dsa": { 00:04:05.652 "mask": "0x200", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "thread": { 00:04:05.652 "mask": "0x400", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "nvme_pcie": { 00:04:05.652 "mask": "0x800", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "iaa": { 00:04:05.652 "mask": "0x1000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "nvme_tcp": { 00:04:05.652 "mask": "0x2000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "bdev_nvme": { 00:04:05.652 "mask": "0x4000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "sock": { 00:04:05.652 "mask": "0x8000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "blob": { 00:04:05.652 "mask": "0x10000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "bdev_raid": { 00:04:05.652 "mask": "0x20000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 }, 00:04:05.652 "scheduler": { 00:04:05.652 "mask": "0x40000", 00:04:05.652 "tpoint_mask": "0x0" 00:04:05.652 } 00:04:05.652 }' 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:05.652 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:05.914 00:04:05.914 real 0m0.255s 00:04:05.914 user 0m0.209s 00:04:05.914 sys 0m0.036s 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.914 11:04:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:05.914 ************************************ 00:04:05.914 END TEST rpc_trace_cmd_test 00:04:05.914 ************************************ 00:04:05.914 11:04:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:05.914 11:04:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:05.914 11:04:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:05.914 11:04:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.914 11:04:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.914 11:04:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.914 ************************************ 00:04:05.914 START TEST rpc_daemon_integrity 00:04:05.914 ************************************ 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.914 11:04:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:05.914 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.176 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.176 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:06.176 { 00:04:06.176 "name": "Malloc2", 00:04:06.176 "aliases": [ 00:04:06.176 "e3f2fe58-ff85-44ad-a8e1-d905a51c8344" 00:04:06.176 ], 00:04:06.176 "product_name": "Malloc disk", 00:04:06.176 "block_size": 512, 00:04:06.176 "num_blocks": 16384, 00:04:06.176 "uuid": "e3f2fe58-ff85-44ad-a8e1-d905a51c8344", 00:04:06.176 "assigned_rate_limits": { 00:04:06.176 "rw_ios_per_sec": 0, 00:04:06.176 "rw_mbytes_per_sec": 0, 00:04:06.176 "r_mbytes_per_sec": 0, 00:04:06.176 "w_mbytes_per_sec": 0 00:04:06.176 }, 00:04:06.176 "claimed": false, 00:04:06.176 "zoned": false, 00:04:06.176 "supported_io_types": { 00:04:06.176 "read": true, 00:04:06.176 "write": true, 00:04:06.176 "unmap": true, 00:04:06.176 "flush": true, 00:04:06.176 "reset": true, 00:04:06.176 "nvme_admin": false, 00:04:06.176 "nvme_io": false, 00:04:06.176 "nvme_io_md": false, 00:04:06.176 "write_zeroes": true, 00:04:06.176 "zcopy": true, 00:04:06.176 "get_zone_info": false, 00:04:06.176 "zone_management": false, 00:04:06.176 "zone_append": false, 00:04:06.176 "compare": false, 00:04:06.176 "compare_and_write": false, 00:04:06.176 "abort": true, 00:04:06.176 "seek_hole": false, 00:04:06.176 "seek_data": false, 00:04:06.176 "copy": true, 00:04:06.176 "nvme_iov_md": false 00:04:06.176 }, 00:04:06.176 "memory_domains": [ 00:04:06.176 { 00:04:06.176 "dma_device_id": "system", 00:04:06.176 "dma_device_type": 1 00:04:06.176 }, 00:04:06.176 { 00:04:06.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.176 "dma_device_type": 2 00:04:06.176 } 00:04:06.176 ], 00:04:06.176 "driver_specific": {} 00:04:06.176 } 00:04:06.176 ]' 00:04:06.176 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:06.176 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.177 [2024-11-20 11:04:58.715216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:06.177 
[2024-11-20 11:04:58.715263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:06.177 [2024-11-20 11:04:58.715279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24f78d0 00:04:06.177 [2024-11-20 11:04:58.715286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:06.177 [2024-11-20 11:04:58.716757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:06.177 [2024-11-20 11:04:58.716794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:06.177 Passthru0 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:06.177 { 00:04:06.177 "name": "Malloc2", 00:04:06.177 "aliases": [ 00:04:06.177 "e3f2fe58-ff85-44ad-a8e1-d905a51c8344" 00:04:06.177 ], 00:04:06.177 "product_name": "Malloc disk", 00:04:06.177 "block_size": 512, 00:04:06.177 "num_blocks": 16384, 00:04:06.177 "uuid": "e3f2fe58-ff85-44ad-a8e1-d905a51c8344", 00:04:06.177 "assigned_rate_limits": { 00:04:06.177 "rw_ios_per_sec": 0, 00:04:06.177 "rw_mbytes_per_sec": 0, 00:04:06.177 "r_mbytes_per_sec": 0, 00:04:06.177 "w_mbytes_per_sec": 0 00:04:06.177 }, 00:04:06.177 "claimed": true, 00:04:06.177 "claim_type": "exclusive_write", 00:04:06.177 "zoned": false, 00:04:06.177 "supported_io_types": { 00:04:06.177 "read": true, 00:04:06.177 "write": true, 00:04:06.177 "unmap": true, 00:04:06.177 "flush": true, 00:04:06.177 "reset": true, 00:04:06.177 "nvme_admin": false, 00:04:06.177 "nvme_io": false, 00:04:06.177 "nvme_io_md": false, 00:04:06.177 "write_zeroes": true, 00:04:06.177 "zcopy": true, 00:04:06.177 "get_zone_info": false, 00:04:06.177 "zone_management": false, 00:04:06.177 "zone_append": false, 00:04:06.177 "compare": false, 00:04:06.177 "compare_and_write": false, 00:04:06.177 "abort": true, 00:04:06.177 "seek_hole": false, 00:04:06.177 "seek_data": false, 00:04:06.177 "copy": true, 00:04:06.177 "nvme_iov_md": false 00:04:06.177 }, 00:04:06.177 "memory_domains": [ 00:04:06.177 { 00:04:06.177 "dma_device_id": "system", 00:04:06.177 "dma_device_type": 1 00:04:06.177 }, 00:04:06.177 { 00:04:06.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.177 "dma_device_type": 2 00:04:06.177 } 00:04:06.177 ], 00:04:06.177 "driver_specific": {} 00:04:06.177 }, 00:04:06.177 { 00:04:06.177 "name": "Passthru0", 00:04:06.177 "aliases": [ 00:04:06.177 "fa3ff49c-d257-5109-862f-23e8da6d6125" 00:04:06.177 ], 00:04:06.177 "product_name": "passthru", 00:04:06.177 "block_size": 512, 00:04:06.177 "num_blocks": 16384, 00:04:06.177 "uuid": "fa3ff49c-d257-5109-862f-23e8da6d6125", 00:04:06.177 "assigned_rate_limits": { 00:04:06.177 "rw_ios_per_sec": 0, 00:04:06.177 "rw_mbytes_per_sec": 0, 00:04:06.177 "r_mbytes_per_sec": 0, 00:04:06.177 "w_mbytes_per_sec": 0 00:04:06.177 }, 00:04:06.177 "claimed": false, 00:04:06.177 "zoned": false, 00:04:06.177 "supported_io_types": { 00:04:06.177 "read": true, 00:04:06.177 "write": true, 00:04:06.177 "unmap": true, 00:04:06.177 "flush": true, 00:04:06.177 "reset": true, 
00:04:06.177 "nvme_admin": false, 00:04:06.177 "nvme_io": false, 00:04:06.177 "nvme_io_md": false, 00:04:06.177 "write_zeroes": true, 00:04:06.177 "zcopy": true, 00:04:06.177 "get_zone_info": false, 00:04:06.177 "zone_management": false, 00:04:06.177 "zone_append": false, 00:04:06.177 "compare": false, 00:04:06.177 "compare_and_write": false, 00:04:06.177 "abort": true, 00:04:06.177 "seek_hole": false, 00:04:06.177 "seek_data": false, 00:04:06.177 "copy": true, 00:04:06.177 "nvme_iov_md": false 00:04:06.177 }, 00:04:06.177 "memory_domains": [ 00:04:06.177 { 00:04:06.177 "dma_device_id": "system", 00:04:06.177 "dma_device_type": 1 00:04:06.177 }, 00:04:06.177 { 00:04:06.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:06.177 "dma_device_type": 2 00:04:06.177 } 00:04:06.177 ], 00:04:06.177 "driver_specific": { 00:04:06.177 "passthru": { 00:04:06.177 "name": "Passthru0", 00:04:06.177 "base_bdev_name": "Malloc2" 00:04:06.177 } 00:04:06.177 } 00:04:06.177 } 00:04:06.177 ]' 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:06.177 00:04:06.177 real 0m0.302s 00:04:06.177 user 0m0.192s 00:04:06.177 sys 0m0.042s 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.177 11:04:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:06.177 ************************************ 00:04:06.177 END TEST rpc_daemon_integrity 00:04:06.177 ************************************ 00:04:06.177 11:04:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:06.177 11:04:58 rpc -- rpc/rpc.sh@84 -- # killprocess 2491763 00:04:06.177 11:04:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 2491763 ']' 00:04:06.177 11:04:58 rpc -- common/autotest_common.sh@958 -- # kill -0 2491763 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@959 -- # uname 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2491763 
00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2491763' 00:04:06.438 killing process with pid 2491763 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@973 -- # kill 2491763 00:04:06.438 11:04:58 rpc -- common/autotest_common.sh@978 -- # wait 2491763 00:04:06.698 00:04:06.698 real 0m2.748s 00:04:06.698 user 0m3.506s 00:04:06.698 sys 0m0.850s 00:04:06.698 11:04:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.698 11:04:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.698 ************************************ 00:04:06.698 END TEST rpc 00:04:06.698 ************************************ 00:04:06.698 11:04:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.698 11:04:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.698 11:04:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.698 11:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.698 ************************************ 00:04:06.698 START TEST skip_rpc 00:04:06.698 ************************************ 00:04:06.698 11:04:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:06.698 * Looking for test storage... 00:04:06.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.698 11:04:59 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:06.698 11:04:59 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:06.698 11:04:59 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:06.959 11:04:59 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.959 11:04:59 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:06.959 11:04:59 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.959 11:04:59 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.959 --rc genhtml_branch_coverage=1 00:04:06.959 --rc genhtml_function_coverage=1 00:04:06.959 --rc genhtml_legend=1 00:04:06.959 --rc geninfo_all_blocks=1 00:04:06.959 --rc geninfo_unexecuted_blocks=1 00:04:06.959 00:04:06.959 ' 00:04:06.959 11:04:59 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.959 --rc genhtml_branch_coverage=1 00:04:06.959 --rc genhtml_function_coverage=1 00:04:06.959 --rc genhtml_legend=1 00:04:06.959 --rc geninfo_all_blocks=1 00:04:06.959 --rc geninfo_unexecuted_blocks=1 00:04:06.959 00:04:06.959 ' 00:04:06.959 11:04:59 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.959 --rc genhtml_branch_coverage=1 00:04:06.959 --rc genhtml_function_coverage=1 00:04:06.959 --rc genhtml_legend=1 00:04:06.959 --rc geninfo_all_blocks=1 00:04:06.959 --rc geninfo_unexecuted_blocks=1 00:04:06.959 00:04:06.959 ' 00:04:06.959 11:04:59 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:06.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.959 --rc genhtml_branch_coverage=1 00:04:06.959 --rc genhtml_function_coverage=1 00:04:06.959 --rc genhtml_legend=1 00:04:06.959 --rc geninfo_all_blocks=1 00:04:06.959 --rc geninfo_unexecuted_blocks=1 00:04:06.959 00:04:06.959 ' 00:04:06.960 11:04:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:06.960 11:04:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:06.960 11:04:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:06.960 11:04:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.960 11:04:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.960 11:04:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.960 ************************************ 00:04:06.960 START TEST skip_rpc 00:04:06.960 ************************************ 00:04:06.960 11:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:06.960 
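The skip_rpc test that follows starts spdk_tgt with --no-rpc-server and then expects spdk_get_version to fail, since nothing is listening on the default /var/tmp/spdk.sock. A minimal sketch of what that failure looks like from C, assuming SPDK's JSON-RPC client API in include/spdk/jsonrpc.h; the function name is illustrative:

#include "spdk/jsonrpc.h"
#include <sys/socket.h>
#include <stdio.h>

static int
rpc_server_is_up(void)
{
	struct spdk_jsonrpc_client *client =
		spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);

	if (client == NULL) {
		/* Expected in the --no-rpc-server case exercised below. */
		printf("no RPC server listening\n");
		return 0;
	}
	spdk_jsonrpc_client_close(client);
	return 1;
}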
11:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2492604 00:04:06.960 11:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.960 11:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:06.960 11:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:06.960 [2024-11-20 11:04:59.599381] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:06.960 [2024-11-20 11:04:59.599437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492604 ] 00:04:06.960 [2024-11-20 11:04:59.691522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.221 [2024-11-20 11:04:59.743688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2492604 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2492604 ']' 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2492604 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2492604 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2492604' 00:04:12.517 killing process with pid 2492604 00:04:12.517 11:05:04 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2492604 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2492604 00:04:12.517 00:04:12.517 real 0m5.263s 00:04:12.517 user 0m5.028s 00:04:12.517 sys 0m0.286s 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.517 11:05:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.517 ************************************ 00:04:12.517 END TEST skip_rpc 00:04:12.517 ************************************ 00:04:12.517 11:05:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:12.517 11:05:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.517 11:05:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.517 11:05:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.517 ************************************ 00:04:12.517 START TEST skip_rpc_with_json 00:04:12.517 ************************************ 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2493650 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2493650 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2493650 ']' 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.517 11:05:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.517 [2024-11-20 11:05:04.940440] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
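The trace above shows the negative-assertion idiom this suite leans on: spdk_tgt was started with --no-rpc-server, so the wrapped rpc_cmd must fail, and the NOT helper turns that failure into a test pass. A simplified reconstruction of the helper, condensed from the xtrace (the real one also routes through valid_exec_arg and special-cases signal exits above 128):

    NOT() {
        local es=0
        "$@" || es=$?
        # pass only when the wrapped command failed; signal deaths
        # (es > 128) are still treated as plain failures here
        (( es != 0 ))
    }
    NOT rpc_cmd spdk_get_version   # passes: no RPC server is listening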
00:04:12.517 [2024-11-20 11:05:04.940495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493650 ] 00:04:12.517 [2024-11-20 11:05:05.026521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.517 [2024-11-20 11:05:05.060974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.088 [2024-11-20 11:05:05.729611] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:13.088 request: 00:04:13.088 { 00:04:13.088 "trtype": "tcp", 00:04:13.088 "method": "nvmf_get_transports", 00:04:13.088 "req_id": 1 00:04:13.088 } 00:04:13.088 Got JSON-RPC error response 00:04:13.088 response: 00:04:13.088 { 00:04:13.088 "code": -19, 00:04:13.088 "message": "No such device" 00:04:13.088 } 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.088 [2024-11-20 11:05:05.741706] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.088 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.348 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.348 11:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:13.348 { 00:04:13.348 "subsystems": [ 00:04:13.348 { 00:04:13.348 "subsystem": "fsdev", 00:04:13.348 "config": [ 00:04:13.348 { 00:04:13.348 "method": "fsdev_set_opts", 00:04:13.348 "params": { 00:04:13.348 "fsdev_io_pool_size": 65535, 00:04:13.348 "fsdev_io_cache_size": 256 00:04:13.348 } 00:04:13.348 } 00:04:13.348 ] 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "subsystem": "vfio_user_target", 00:04:13.348 "config": null 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "subsystem": "keyring", 00:04:13.348 "config": [] 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "subsystem": "iobuf", 00:04:13.348 "config": [ 00:04:13.348 { 00:04:13.348 "method": "iobuf_set_options", 00:04:13.348 "params": { 00:04:13.348 "small_pool_count": 8192, 00:04:13.348 "large_pool_count": 1024, 00:04:13.348 "small_bufsize": 8192, 00:04:13.348 "large_bufsize": 135168, 00:04:13.348 "enable_numa": false 00:04:13.348 } 00:04:13.348 } 
00:04:13.348 ] 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "subsystem": "sock", 00:04:13.348 "config": [ 00:04:13.348 { 00:04:13.348 "method": "sock_set_default_impl", 00:04:13.348 "params": { 00:04:13.348 "impl_name": "posix" 00:04:13.348 } 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "method": "sock_impl_set_options", 00:04:13.348 "params": { 00:04:13.348 "impl_name": "ssl", 00:04:13.348 "recv_buf_size": 4096, 00:04:13.348 "send_buf_size": 4096, 00:04:13.348 "enable_recv_pipe": true, 00:04:13.348 "enable_quickack": false, 00:04:13.348 "enable_placement_id": 0, 00:04:13.348 "enable_zerocopy_send_server": true, 00:04:13.348 "enable_zerocopy_send_client": false, 00:04:13.348 "zerocopy_threshold": 0, 00:04:13.348 "tls_version": 0, 00:04:13.348 "enable_ktls": false 00:04:13.348 } 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "method": "sock_impl_set_options", 00:04:13.348 "params": { 00:04:13.348 "impl_name": "posix", 00:04:13.348 "recv_buf_size": 2097152, 00:04:13.348 "send_buf_size": 2097152, 00:04:13.348 "enable_recv_pipe": true, 00:04:13.348 "enable_quickack": false, 00:04:13.348 "enable_placement_id": 0, 00:04:13.348 "enable_zerocopy_send_server": true, 00:04:13.348 "enable_zerocopy_send_client": false, 00:04:13.348 "zerocopy_threshold": 0, 00:04:13.348 "tls_version": 0, 00:04:13.348 "enable_ktls": false 00:04:13.348 } 00:04:13.348 } 00:04:13.348 ] 00:04:13.348 }, 00:04:13.348 { 00:04:13.348 "subsystem": "vmd", 00:04:13.348 "config": [] 00:04:13.348 }, 00:04:13.348 { 00:04:13.349 "subsystem": "accel", 00:04:13.349 "config": [ 00:04:13.349 { 00:04:13.349 "method": "accel_set_options", 00:04:13.349 "params": { 00:04:13.349 "small_cache_size": 128, 00:04:13.349 "large_cache_size": 16, 00:04:13.349 "task_count": 2048, 00:04:13.349 "sequence_count": 2048, 00:04:13.349 "buf_count": 2048 00:04:13.349 } 00:04:13.349 } 00:04:13.349 ] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "bdev", 00:04:13.349 "config": [ 00:04:13.349 { 00:04:13.349 "method": "bdev_set_options", 00:04:13.349 "params": { 00:04:13.349 "bdev_io_pool_size": 65535, 00:04:13.349 "bdev_io_cache_size": 256, 00:04:13.349 "bdev_auto_examine": true, 00:04:13.349 "iobuf_small_cache_size": 128, 00:04:13.349 "iobuf_large_cache_size": 16 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "bdev_raid_set_options", 00:04:13.349 "params": { 00:04:13.349 "process_window_size_kb": 1024, 00:04:13.349 "process_max_bandwidth_mb_sec": 0 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "bdev_iscsi_set_options", 00:04:13.349 "params": { 00:04:13.349 "timeout_sec": 30 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "bdev_nvme_set_options", 00:04:13.349 "params": { 00:04:13.349 "action_on_timeout": "none", 00:04:13.349 "timeout_us": 0, 00:04:13.349 "timeout_admin_us": 0, 00:04:13.349 "keep_alive_timeout_ms": 10000, 00:04:13.349 "arbitration_burst": 0, 00:04:13.349 "low_priority_weight": 0, 00:04:13.349 "medium_priority_weight": 0, 00:04:13.349 "high_priority_weight": 0, 00:04:13.349 "nvme_adminq_poll_period_us": 10000, 00:04:13.349 "nvme_ioq_poll_period_us": 0, 00:04:13.349 "io_queue_requests": 0, 00:04:13.349 "delay_cmd_submit": true, 00:04:13.349 "transport_retry_count": 4, 00:04:13.349 "bdev_retry_count": 3, 00:04:13.349 "transport_ack_timeout": 0, 00:04:13.349 "ctrlr_loss_timeout_sec": 0, 00:04:13.349 "reconnect_delay_sec": 0, 00:04:13.349 "fast_io_fail_timeout_sec": 0, 00:04:13.349 "disable_auto_failback": false, 00:04:13.349 "generate_uuids": false, 00:04:13.349 "transport_tos": 
0, 00:04:13.349 "nvme_error_stat": false, 00:04:13.349 "rdma_srq_size": 0, 00:04:13.349 "io_path_stat": false, 00:04:13.349 "allow_accel_sequence": false, 00:04:13.349 "rdma_max_cq_size": 0, 00:04:13.349 "rdma_cm_event_timeout_ms": 0, 00:04:13.349 "dhchap_digests": [ 00:04:13.349 "sha256", 00:04:13.349 "sha384", 00:04:13.349 "sha512" 00:04:13.349 ], 00:04:13.349 "dhchap_dhgroups": [ 00:04:13.349 "null", 00:04:13.349 "ffdhe2048", 00:04:13.349 "ffdhe3072", 00:04:13.349 "ffdhe4096", 00:04:13.349 "ffdhe6144", 00:04:13.349 "ffdhe8192" 00:04:13.349 ] 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "bdev_nvme_set_hotplug", 00:04:13.349 "params": { 00:04:13.349 "period_us": 100000, 00:04:13.349 "enable": false 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "bdev_wait_for_examine" 00:04:13.349 } 00:04:13.349 ] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "scsi", 00:04:13.349 "config": null 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "scheduler", 00:04:13.349 "config": [ 00:04:13.349 { 00:04:13.349 "method": "framework_set_scheduler", 00:04:13.349 "params": { 00:04:13.349 "name": "static" 00:04:13.349 } 00:04:13.349 } 00:04:13.349 ] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "vhost_scsi", 00:04:13.349 "config": [] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "vhost_blk", 00:04:13.349 "config": [] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "ublk", 00:04:13.349 "config": [] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "nbd", 00:04:13.349 "config": [] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "nvmf", 00:04:13.349 "config": [ 00:04:13.349 { 00:04:13.349 "method": "nvmf_set_config", 00:04:13.349 "params": { 00:04:13.349 "discovery_filter": "match_any", 00:04:13.349 "admin_cmd_passthru": { 00:04:13.349 "identify_ctrlr": false 00:04:13.349 }, 00:04:13.349 "dhchap_digests": [ 00:04:13.349 "sha256", 00:04:13.349 "sha384", 00:04:13.349 "sha512" 00:04:13.349 ], 00:04:13.349 "dhchap_dhgroups": [ 00:04:13.349 "null", 00:04:13.349 "ffdhe2048", 00:04:13.349 "ffdhe3072", 00:04:13.349 "ffdhe4096", 00:04:13.349 "ffdhe6144", 00:04:13.349 "ffdhe8192" 00:04:13.349 ] 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "nvmf_set_max_subsystems", 00:04:13.349 "params": { 00:04:13.349 "max_subsystems": 1024 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "nvmf_set_crdt", 00:04:13.349 "params": { 00:04:13.349 "crdt1": 0, 00:04:13.349 "crdt2": 0, 00:04:13.349 "crdt3": 0 00:04:13.349 } 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "method": "nvmf_create_transport", 00:04:13.349 "params": { 00:04:13.349 "trtype": "TCP", 00:04:13.349 "max_queue_depth": 128, 00:04:13.349 "max_io_qpairs_per_ctrlr": 127, 00:04:13.349 "in_capsule_data_size": 4096, 00:04:13.349 "max_io_size": 131072, 00:04:13.349 "io_unit_size": 131072, 00:04:13.349 "max_aq_depth": 128, 00:04:13.349 "num_shared_buffers": 511, 00:04:13.349 "buf_cache_size": 4294967295, 00:04:13.349 "dif_insert_or_strip": false, 00:04:13.349 "zcopy": false, 00:04:13.349 "c2h_success": true, 00:04:13.349 "sock_priority": 0, 00:04:13.349 "abort_timeout_sec": 1, 00:04:13.349 "ack_timeout": 0, 00:04:13.349 "data_wr_pool_size": 0 00:04:13.349 } 00:04:13.349 } 00:04:13.349 ] 00:04:13.349 }, 00:04:13.349 { 00:04:13.349 "subsystem": "iscsi", 00:04:13.349 "config": [ 00:04:13.349 { 00:04:13.349 "method": "iscsi_set_options", 00:04:13.349 "params": { 00:04:13.349 "node_base": "iqn.2016-06.io.spdk", 00:04:13.349 "max_sessions": 
128, 00:04:13.349 "max_connections_per_session": 2, 00:04:13.349 "max_queue_depth": 64, 00:04:13.349 "default_time2wait": 2, 00:04:13.349 "default_time2retain": 20, 00:04:13.349 "first_burst_length": 8192, 00:04:13.349 "immediate_data": true, 00:04:13.349 "allow_duplicated_isid": false, 00:04:13.349 "error_recovery_level": 0, 00:04:13.349 "nop_timeout": 60, 00:04:13.349 "nop_in_interval": 30, 00:04:13.349 "disable_chap": false, 00:04:13.349 "require_chap": false, 00:04:13.349 "mutual_chap": false, 00:04:13.349 "chap_group": 0, 00:04:13.349 "max_large_datain_per_connection": 64, 00:04:13.349 "max_r2t_per_connection": 4, 00:04:13.349 "pdu_pool_size": 36864, 00:04:13.349 "immediate_data_pool_size": 16384, 00:04:13.349 "data_out_pool_size": 2048 00:04:13.349 } 00:04:13.349 } 00:04:13.349 ] 00:04:13.349 } 00:04:13.349 ] 00:04:13.349 } 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2493650 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2493650 ']' 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2493650 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2493650 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2493650' 00:04:13.349 killing process with pid 2493650 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2493650 00:04:13.349 11:05:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2493650 00:04:13.609 11:05:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2493989 00:04:13.609 11:05:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:13.609 11:05:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:18.891 11:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2493989 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2493989 ']' 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2493989 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2493989 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2493989' 00:04:18.892 killing process with pid 2493989 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2493989 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2493989 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:18.892 00:04:18.892 real 0m6.544s 00:04:18.892 user 0m6.481s 00:04:18.892 sys 0m0.534s 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.892 ************************************ 00:04:18.892 END TEST skip_rpc_with_json 00:04:18.892 ************************************ 00:04:18.892 11:05:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:18.892 11:05:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.892 11:05:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.892 11:05:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.892 ************************************ 00:04:18.892 START TEST skip_rpc_with_delay 00:04:18.892 ************************************ 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:18.892 
[2024-11-20 11:05:11.563870] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.892 00:04:18.892 real 0m0.076s 00:04:18.892 user 0m0.043s 00:04:18.892 sys 0m0.032s 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.892 11:05:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:18.892 ************************************ 00:04:18.892 END TEST skip_rpc_with_delay 00:04:18.892 ************************************ 00:04:18.892 11:05:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:18.892 11:05:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:18.892 11:05:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:18.892 11:05:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.892 11:05:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.892 11:05:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.153 ************************************ 00:04:19.153 START TEST exit_on_failed_rpc_init 00:04:19.153 ************************************ 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2495050 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2495050 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2495050 ']' 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.153 11:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.153 [2024-11-20 11:05:11.718677] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
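Before the exit_on_failed_rpc_init run that starts here, skip_rpc_with_json completed its round trip: the JSON dumped above came from save_config, was written to test/rpc/config.json, and a second spdk_tgt (pid 2493989) replayed it via --json, with grep -q 'TCP Transport Init' confirming the TCP transport was recreated from the file. The shape of that round trip, condensed (paths shortened and the log redirection is illustrative, not copied from the harness):

    rpc_cmd save_config > config.json            # dump the live configuration
    spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5                                      # give init time to finish
    grep -q 'TCP Transport Init' log.txt         # transport restored from JSON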
00:04:19.153 [2024-11-20 11:05:11.718734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495050 ] 00:04:19.153 [2024-11-20 11:05:11.804343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.153 [2024-11-20 11:05:11.839774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.096 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:20.096 [2024-11-20 11:05:12.580805] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:20.096 [2024-11-20 11:05:12.580858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495324 ] 00:04:20.096 [2024-11-20 11:05:12.667183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.096 [2024-11-20 11:05:12.702781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.096 [2024-11-20 11:05:12.702831] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
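This "socket in use" error is the point of the test: the second spdk_tgt (-m 0x2) was launched while the first (pid 2495050, reactor on core 0) still owned the default RPC socket /var/tmp/spdk.sock, so rpc_listen fails and the app is expected to exit non-zero. Running two targets side by side for real means giving each its own RPC socket with -r, as the json_config suite does later with /var/tmp/spdk_tgt.sock; a sketch with made-up socket paths and shortened binary names:

    spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # point rpc.py at whichever instance you want to talk to
    rpc.py -s /var/tmp/spdk_b.sock spdk_get_version

Note the EAL lines above already show per-pid --file-prefix values, so hugepage state does not collide; only the RPC socket does, deliberately.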
00:04:20.096 [2024-11-20 11:05:12.702841] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:20.096 [2024-11-20 11:05:12.702847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2495050 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2495050 ']' 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2495050 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2495050 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2495050' 00:04:20.097 killing process with pid 2495050 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2495050 00:04:20.097 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2495050 00:04:20.357 00:04:20.357 real 0m1.334s 00:04:20.357 user 0m1.578s 00:04:20.357 sys 0m0.372s 00:04:20.357 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.357 11:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.357 ************************************ 00:04:20.357 END TEST exit_on_failed_rpc_init 00:04:20.357 ************************************ 00:04:20.357 11:05:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:20.357 00:04:20.357 real 0m13.735s 00:04:20.357 user 0m13.352s 00:04:20.357 sys 0m1.547s 00:04:20.357 11:05:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.357 11:05:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.357 ************************************ 00:04:20.357 END TEST skip_rpc 00:04:20.357 ************************************ 00:04:20.357 11:05:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:20.357 11:05:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.357 11:05:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.357 11:05:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:20.619 ************************************ 00:04:20.619 START TEST rpc_client 00:04:20.619 ************************************ 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:20.619 * Looking for test storage... 00:04:20.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.619 11:05:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.619 11:05:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.619 --rc genhtml_branch_coverage=1 00:04:20.619 --rc genhtml_function_coverage=1 00:04:20.619 --rc genhtml_legend=1 00:04:20.619 --rc geninfo_all_blocks=1 00:04:20.619 --rc geninfo_unexecuted_blocks=1 00:04:20.619 00:04:20.619 ' 00:04:20.620 11:05:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.620 --rc genhtml_branch_coverage=1 00:04:20.620 --rc genhtml_function_coverage=1 00:04:20.620 --rc genhtml_legend=1 00:04:20.620 --rc geninfo_all_blocks=1 00:04:20.620 --rc geninfo_unexecuted_blocks=1 00:04:20.620 00:04:20.620 ' 00:04:20.620 11:05:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.620 --rc genhtml_branch_coverage=1 00:04:20.620 --rc genhtml_function_coverage=1 00:04:20.620 --rc genhtml_legend=1 00:04:20.620 --rc geninfo_all_blocks=1 00:04:20.620 --rc geninfo_unexecuted_blocks=1 00:04:20.620 00:04:20.620 ' 00:04:20.620 11:05:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.620 --rc genhtml_branch_coverage=1 00:04:20.620 --rc genhtml_function_coverage=1 00:04:20.620 --rc genhtml_legend=1 00:04:20.620 --rc geninfo_all_blocks=1 00:04:20.620 --rc geninfo_unexecuted_blocks=1 00:04:20.620 00:04:20.620 ' 00:04:20.620 11:05:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:20.620 OK 00:04:20.620 11:05:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:20.620 00:04:20.620 real 0m0.225s 00:04:20.620 user 0m0.142s 00:04:20.620 sys 0m0.097s 00:04:20.620 11:05:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.620 11:05:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:20.620 ************************************ 00:04:20.620 END TEST rpc_client 00:04:20.620 ************************************ 00:04:20.881 11:05:13 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
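The scripts/common.sh walk traced during this test is a pure-bash version comparison: both version strings are split on '.', '-' and ':' into arrays and compared field by field, which is how "lt 1.15 2" decides the installed lcov predates 2.x. A condensed sketch of that logic (not the exact library function, which also validates that fields are numeric):

    lt() {  # true when version $1 sorts before version $2
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo "lcov is older than 2.x"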
00:04:20.881 11:05:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.881 11:05:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.881 11:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:20.881 ************************************ 00:04:20.881 START TEST json_config 00:04:20.881 ************************************ 00:04:20.881 11:05:13 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:20.881 11:05:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.881 11:05:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.881 11:05:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.881 11:05:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.881 11:05:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.881 11:05:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.881 11:05:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.882 11:05:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.882 11:05:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.882 11:05:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.882 11:05:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.882 11:05:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.882 11:05:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.882 11:05:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:20.882 11:05:13 json_config -- scripts/common.sh@345 -- # : 1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.882 11:05:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.882 11:05:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@353 -- # local d=1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.882 11:05:13 json_config -- scripts/common.sh@355 -- # echo 1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.882 11:05:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:20.882 11:05:13 json_config -- scripts/common.sh@353 -- # local d=2 00:04:20.882 11:05:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.882 11:05:13 json_config -- scripts/common.sh@355 -- # echo 2 00:04:20.882 11:05:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.882 11:05:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.882 11:05:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.882 11:05:13 json_config -- scripts/common.sh@368 -- # return 0 00:04:20.882 11:05:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.882 11:05:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.882 --rc genhtml_branch_coverage=1 00:04:20.882 --rc genhtml_function_coverage=1 00:04:20.882 --rc genhtml_legend=1 00:04:20.882 --rc geninfo_all_blocks=1 00:04:20.882 --rc geninfo_unexecuted_blocks=1 00:04:20.882 00:04:20.882 ' 00:04:20.882 11:05:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.882 --rc genhtml_branch_coverage=1 00:04:20.882 --rc genhtml_function_coverage=1 00:04:20.882 --rc genhtml_legend=1 00:04:20.882 --rc geninfo_all_blocks=1 00:04:20.882 --rc geninfo_unexecuted_blocks=1 00:04:20.882 00:04:20.882 ' 00:04:20.882 11:05:13 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.882 --rc genhtml_branch_coverage=1 00:04:20.882 --rc genhtml_function_coverage=1 00:04:20.882 --rc genhtml_legend=1 00:04:20.882 --rc geninfo_all_blocks=1 00:04:20.882 --rc geninfo_unexecuted_blocks=1 00:04:20.882 00:04:20.882 ' 00:04:20.882 11:05:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.882 --rc genhtml_branch_coverage=1 00:04:20.882 --rc genhtml_function_coverage=1 00:04:20.882 --rc genhtml_legend=1 00:04:20.882 --rc geninfo_all_blocks=1 00:04:20.882 --rc geninfo_unexecuted_blocks=1 00:04:20.882 00:04:20.882 ' 00:04:20.882 11:05:13 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
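nvmf/common.sh, being sourced here (the variable listing continues just below), pins the defaults the whole suite builds on: a TCP target reachable at 127.0.0.1, listener ports 4420 through 4422, serial SPDKISFASTANDAWESOME, and a freshly generated host NQN/ID pair. Purely to decode those variables, a hypothetical client-side connect with nvme-cli could look like the following; it is never executed in this log, and the cnode1 subsystem name is taken from the json_config steps further down:

    nvme connect -t tcp -a 127.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn "$NVME_HOSTNQN" --hostid "$NVME_HOSTID"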
00:04:20.882 11:05:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:20.882 11:05:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:20.882 11:05:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:20.882 11:05:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:20.882 11:05:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:20.882 11:05:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.882 11:05:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.882 11:05:13 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.882 11:05:13 json_config -- paths/export.sh@5 -- # export PATH 00:04:20.882 11:05:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@51 -- # : 0 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:20.882 11:05:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:20.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:20.882 11:05:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:21.143 INFO: JSON configuration test init 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.143 11:05:13 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:21.143 11:05:13 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:21.143 11:05:13 json_config -- json_config/common.sh@10 -- # shift 00:04:21.143 11:05:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.143 11:05:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.143 11:05:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.143 11:05:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.143 11:05:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.143 11:05:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2495530 00:04:21.143 11:05:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.143 Waiting for target to run... 00:04:21.143 11:05:13 json_config -- json_config/common.sh@25 -- # waitforlisten 2495530 /var/tmp/spdk_tgt.sock 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@835 -- # '[' -z 2495530 ']' 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.143 11:05:13 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.143 11:05:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.143 [2024-11-20 11:05:13.703498] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
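The --wait-for-rpc flag on this launch makes the target stop before subsystem initialization and wait for RPCs; the trace that follows then generates a config with gen_nvme.sh --json-with-subsystems and replays it through load_config, which (as I read the rpc.py helper) issues the startup-phase RPCs, triggers framework_start_init, and then applies the runtime-phase ones. Condensed, with binary paths shortened:

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # target idles before subsystem init until the config arrives
    gen_nvme.sh --json-with-subsystems | rpc.py -s /var/tmp/spdk_tgt.sock load_config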
00:04:21.143 [2024-11-20 11:05:13.703569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2495530 ] 00:04:21.717 [2024-11-20 11:05:14.151889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.717 [2024-11-20 11:05:14.185419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.978 11:05:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.978 11:05:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:21.978 11:05:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:21.978 00:04:21.978 11:05:14 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:21.978 11:05:14 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:21.978 11:05:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.978 11:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.978 11:05:14 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:21.978 11:05:14 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:21.978 11:05:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.978 11:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.978 11:05:14 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:21.978 11:05:14 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:21.978 11:05:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:22.550 11:05:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.550 11:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:22.550 11:05:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:22.550 11:05:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:22.812 11:05:15 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@54 -- # sort 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:22.812 11:05:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.812 11:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:22.812 11:05:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:22.812 11:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.812 11:05:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.812 MallocForNvmf0 00:04:22.812 11:05:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.812 11:05:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.073 MallocForNvmf1 00:04:23.073 11:05:15 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.073 11:05:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.334 [2024-11-20 11:05:15.865405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.334 11:05:15 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.334 11:05:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.595 11:05:16 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.595 11:05:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.595 11:05:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.595 11:05:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.857 11:05:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.857 11:05:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:23.858 [2024-11-20 11:05:16.579562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.122 11:05:16 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:24.122 11:05:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.122 11:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.122 11:05:16 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:24.122 11:05:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.122 11:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.122 11:05:16 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:24.122 11:05:16 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.122 11:05:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.122 MallocBdevForConfigChangeCheck 00:04:24.383 11:05:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:24.383 11:05:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.383 11:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.383 11:05:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:24.383 11:05:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.644 11:05:17 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:24.644 INFO: shutting down applications... 
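The run above builds the NVMe-oF TCP target configuration entirely over the RPC socket before saving it. A condensed sketch of the same call sequence, issued by hand against a running spdk_tgt — the commands, names, and values are copied from the log, and the repo-relative rpc.py path is assumed from the usual SPDK layout:

# Two malloc bdevs back the subsystem's namespaces (size in MB, then block size).
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport; -u is the I/O unit size, -c the in-capsule data size.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
# Subsystem allowing any host (-a) with a fixed serial number (-s), two namespaces, one listener.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420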
00:04:24.644 11:05:17 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:24.644 11:05:17 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:24.644 11:05:17 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:24.644 11:05:17 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:24.906 Calling clear_iscsi_subsystem 00:04:24.906 Calling clear_nvmf_subsystem 00:04:24.906 Calling clear_nbd_subsystem 00:04:24.906 Calling clear_ublk_subsystem 00:04:24.906 Calling clear_vhost_blk_subsystem 00:04:24.906 Calling clear_vhost_scsi_subsystem 00:04:24.906 Calling clear_bdev_subsystem 00:04:25.167 11:05:17 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:25.167 11:05:17 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:25.167 11:05:17 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:25.167 11:05:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.167 11:05:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:25.167 11:05:17 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:25.427 11:05:18 json_config -- json_config/json_config.sh@352 -- # break 00:04:25.427 11:05:18 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:25.427 11:05:18 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:25.427 11:05:18 json_config -- json_config/common.sh@31 -- # local app=target 00:04:25.427 11:05:18 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.427 11:05:18 json_config -- json_config/common.sh@35 -- # [[ -n 2495530 ]] 00:04:25.427 11:05:18 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2495530 00:04:25.427 11:05:18 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.427 11:05:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.427 11:05:18 json_config -- json_config/common.sh@41 -- # kill -0 2495530 00:04:25.427 11:05:18 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.999 11:05:18 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.999 11:05:18 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.999 11:05:18 json_config -- json_config/common.sh@41 -- # kill -0 2495530 00:04:25.999 11:05:18 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.999 11:05:18 json_config -- json_config/common.sh@43 -- # break 00:04:25.999 11:05:18 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.999 11:05:18 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.999 SPDK target shutdown done 00:04:25.999 11:05:18 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:25.999 INFO: relaunching applications... 
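The relaunch that follows restarts spdk_tgt from the configuration saved a moment earlier, so the JSON file alone recreates the bdevs, transport, and subsystem with no further RPCs. A minimal sketch of the pattern, with the binary and flags as they appear in the log; the readiness poll here approximates the harness's waitforlisten with rpc.py's connection-retry option:

# Persist the live configuration, then restart the target from it.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
tgt_pid=$!
# Block until the RPC socket answers; -r retries the connection while the app boots.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock -r 100 rpc_get_methods > /dev/null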
00:04:25.999 11:05:18 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.999 11:05:18 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.999 11:05:18 json_config -- json_config/common.sh@10 -- # shift 00:04:25.999 11:05:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.999 11:05:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.999 11:05:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.999 11:05:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.999 11:05:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.999 11:05:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2496667 00:04:25.999 11:05:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.999 Waiting for target to run... 00:04:25.999 11:05:18 json_config -- json_config/common.sh@25 -- # waitforlisten 2496667 /var/tmp/spdk_tgt.sock 00:04:25.999 11:05:18 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.999 11:05:18 json_config -- common/autotest_common.sh@835 -- # '[' -z 2496667 ']' 00:04:25.999 11:05:18 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.999 11:05:18 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.999 11:05:18 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.999 11:05:18 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.999 11:05:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.999 [2024-11-20 11:05:18.570989] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:25.999 [2024-11-20 11:05:18.571047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496667 ] 00:04:26.259 [2024-11-20 11:05:18.927316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.259 [2024-11-20 11:05:18.960473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.831 [2024-11-20 11:05:19.459852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.831 [2024-11-20 11:05:19.492228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.831 11:05:19 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.831 11:05:19 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:26.831 11:05:19 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.831 00:04:26.831 11:05:19 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:26.831 11:05:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:26.831 INFO: Checking if target configuration is the same... 
00:04:26.831 11:05:19 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:26.831 11:05:19 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.831 11:05:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.831 + '[' 2 -ne 2 ']' 00:04:26.831 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.831 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:26.831 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.831 +++ basename /dev/fd/62 00:04:26.831 ++ mktemp /tmp/62.XXX 00:04:26.831 + tmp_file_1=/tmp/62.Ntx 00:04:26.831 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.831 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.831 + tmp_file_2=/tmp/spdk_tgt_config.json.7Re 00:04:26.831 + ret=0 00:04:26.831 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.402 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.402 + diff -u /tmp/62.Ntx /tmp/spdk_tgt_config.json.7Re 00:04:27.402 + echo 'INFO: JSON config files are the same' 00:04:27.402 INFO: JSON config files are the same 00:04:27.402 + rm /tmp/62.Ntx /tmp/spdk_tgt_config.json.7Re 00:04:27.402 + exit 0 00:04:27.402 11:05:19 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:27.402 11:05:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:27.402 INFO: changing configuration and checking if this can be detected... 00:04:27.402 11:05:19 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.402 11:05:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.402 11:05:20 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:27.402 11:05:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.402 11:05:20 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.402 + '[' 2 -ne 2 ']' 00:04:27.402 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:27.402 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
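save_config output is only comparable after normalization, which is why json_diff.sh routes both documents through config_filter.py -method sort before diffing. A condensed sketch of the check; the temp-file names are illustrative stand-ins for the mktemp results shown in the log:

# Normalize the live config and the saved file, then compare.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.json
test/json_config/config_filter.py -method sort \
    < spdk_tgt_config.json > /tmp/saved.json
if diff -u /tmp/live.json /tmp/saved.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi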
00:04:27.402 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:27.402 +++ basename /dev/fd/62 00:04:27.402 ++ mktemp /tmp/62.XXX 00:04:27.402 + tmp_file_1=/tmp/62.H6F 00:04:27.402 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.402 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:27.402 + tmp_file_2=/tmp/spdk_tgt_config.json.UQ3 00:04:27.402 + ret=0 00:04:27.402 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.974 + diff -u /tmp/62.H6F /tmp/spdk_tgt_config.json.UQ3 00:04:27.974 + ret=1 00:04:27.974 + echo '=== Start of file: /tmp/62.H6F ===' 00:04:27.974 + cat /tmp/62.H6F 00:04:27.974 + echo '=== End of file: /tmp/62.H6F ===' 00:04:27.974 + echo '' 00:04:27.974 + echo '=== Start of file: /tmp/spdk_tgt_config.json.UQ3 ===' 00:04:27.974 + cat /tmp/spdk_tgt_config.json.UQ3 00:04:27.974 + echo '=== End of file: /tmp/spdk_tgt_config.json.UQ3 ===' 00:04:27.974 + echo '' 00:04:27.974 + rm /tmp/62.H6F /tmp/spdk_tgt_config.json.UQ3 00:04:27.974 + exit 1 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:27.974 INFO: configuration change detected. 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@324 -- # [[ -n 2496667 ]] 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.974 11:05:20 json_config -- json_config/json_config.sh@330 -- # killprocess 2496667 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@954 -- # '[' -z 2496667 ']' 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@958 -- # kill -0 2496667 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@959 -- # uname 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.974 11:05:20 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496667 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496667' 00:04:27.974 killing process with pid 2496667 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@973 -- # kill 2496667 00:04:27.974 11:05:20 json_config -- common/autotest_common.sh@978 -- # wait 2496667 00:04:28.237 11:05:20 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.237 11:05:20 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:28.237 11:05:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.237 11:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.237 11:05:20 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:28.237 11:05:20 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:28.237 INFO: Success 00:04:28.237 00:04:28.237 real 0m7.499s 00:04:28.237 user 0m8.861s 00:04:28.237 sys 0m2.211s 00:04:28.237 11:05:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.237 11:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.237 ************************************ 00:04:28.237 END TEST json_config 00:04:28.237 ************************************ 00:04:28.237 11:05:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.237 11:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.237 11:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.237 11:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:28.499 ************************************ 00:04:28.499 START TEST json_config_extra_key 00:04:28.499 ************************************ 00:04:28.499 11:05:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.499 11:05:21 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.499 --rc genhtml_branch_coverage=1 00:04:28.499 --rc genhtml_function_coverage=1 00:04:28.499 --rc genhtml_legend=1 00:04:28.499 --rc geninfo_all_blocks=1 00:04:28.499 --rc geninfo_unexecuted_blocks=1 00:04:28.499 00:04:28.499 ' 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.499 --rc genhtml_branch_coverage=1 00:04:28.499 --rc genhtml_function_coverage=1 00:04:28.499 --rc genhtml_legend=1 00:04:28.499 --rc geninfo_all_blocks=1 00:04:28.499 --rc geninfo_unexecuted_blocks=1 00:04:28.499 00:04:28.499 ' 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.499 --rc genhtml_branch_coverage=1 00:04:28.499 --rc genhtml_function_coverage=1 00:04:28.499 --rc genhtml_legend=1 00:04:28.499 --rc geninfo_all_blocks=1 00:04:28.499 --rc geninfo_unexecuted_blocks=1 00:04:28.499 00:04:28.499 ' 00:04:28.499 11:05:21 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.499 --rc genhtml_branch_coverage=1 00:04:28.499 --rc genhtml_function_coverage=1 00:04:28.499 --rc genhtml_legend=1 00:04:28.499 --rc geninfo_all_blocks=1 00:04:28.499 --rc geninfo_unexecuted_blocks=1 00:04:28.499 00:04:28.499 ' 00:04:28.499 11:05:21 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.499 11:05:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.499 11:05:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.499 11:05:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.499 11:05:21 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.499 11:05:21 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.499 11:05:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:28.500 11:05:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.500 11:05:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:28.500 INFO: launching applications... 
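The associative arrays initialized above are json_config common.sh's per-app bookkeeping: one slot per application keyed by name, which is what lets the same start and shutdown helpers serve several targets. A minimal sketch of how the launch consumes them, using the values echoed in the log:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='test/json_config/extra_key.json')

app=target
# app_params is left unquoted on purpose so the flags word-split.
build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!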
00:04:28.500 11:05:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2497446 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.500 Waiting for target to run... 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2497446 /var/tmp/spdk_tgt.sock 00:04:28.500 11:05:21 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2497446 ']' 00:04:28.500 11:05:21 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.500 11:05:21 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.500 11:05:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.500 11:05:21 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.500 11:05:21 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.500 11:05:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.761 [2024-11-20 11:05:21.263044] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:28.761 [2024-11-20 11:05:21.263113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497446 ] 00:04:29.021 [2024-11-20 11:05:21.532533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.021 [2024-11-20 11:05:21.557809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.593 11:05:22 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.593 11:05:22 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:29.593 00:04:29.593 11:05:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:29.593 INFO: shutting down applications... 
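The shutdown that follows is the graceful pattern common.sh applies to every target: send SIGINT, then poll the pid and give up after 30 half-second rounds. A condensed sketch, with the loop bounds and sleep taken from the xtrace output above:

kill -SIGINT "${app_pid[$app]}"
for ((i = 0; i < 30; i++)); do
    # kill -0 sends no signal; it only tests whether the pid still exists.
    if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
        app_pid[$app]=
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done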
00:04:29.593 11:05:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2497446 ]] 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2497446 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2497446 00:04:29.593 11:05:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2497446 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:29.855 11:05:22 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:29.855 SPDK target shutdown done 00:04:29.855 11:05:22 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:29.855 Success 00:04:29.855 00:04:29.855 real 0m1.567s 00:04:29.855 user 0m1.186s 00:04:29.855 sys 0m0.404s 00:04:29.855 11:05:22 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.855 11:05:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:29.855 ************************************ 00:04:29.855 END TEST json_config_extra_key 00:04:29.855 ************************************ 00:04:30.116 11:05:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.116 11:05:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.116 11:05:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.116 11:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:30.116 ************************************ 00:04:30.116 START TEST alias_rpc 00:04:30.116 ************************************ 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.116 * Looking for test storage... 
00:04:30.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.116 11:05:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.116 --rc genhtml_branch_coverage=1 00:04:30.116 --rc genhtml_function_coverage=1 00:04:30.116 --rc genhtml_legend=1 00:04:30.116 --rc geninfo_all_blocks=1 00:04:30.116 --rc geninfo_unexecuted_blocks=1 00:04:30.116 00:04:30.116 ' 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.116 --rc genhtml_branch_coverage=1 00:04:30.116 --rc genhtml_function_coverage=1 00:04:30.116 --rc genhtml_legend=1 00:04:30.116 --rc geninfo_all_blocks=1 00:04:30.116 --rc geninfo_unexecuted_blocks=1 00:04:30.116 00:04:30.116 ' 00:04:30.116 11:05:22 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.116 --rc genhtml_branch_coverage=1 00:04:30.116 --rc genhtml_function_coverage=1 00:04:30.116 --rc genhtml_legend=1 00:04:30.116 --rc geninfo_all_blocks=1 00:04:30.116 --rc geninfo_unexecuted_blocks=1 00:04:30.116 00:04:30.116 ' 00:04:30.116 11:05:22 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.116 --rc genhtml_branch_coverage=1 00:04:30.116 --rc genhtml_function_coverage=1 00:04:30.116 --rc genhtml_legend=1 00:04:30.116 --rc geninfo_all_blocks=1 00:04:30.116 --rc geninfo_unexecuted_blocks=1 00:04:30.116 00:04:30.116 ' 00:04:30.116 11:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.117 11:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2497843 00:04:30.117 11:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2497843 00:04:30.117 11:05:22 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2497843 ']' 00:04:30.117 11:05:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.117 11:05:22 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.117 11:05:22 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.117 11:05:22 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.117 11:05:22 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.117 11:05:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.377 [2024-11-20 11:05:22.905484] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:04:30.377 [2024-11-20 11:05:22.905559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497843 ] 00:04:30.377 [2024-11-20 11:05:22.993955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.377 [2024-11-20 11:05:23.028653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.318 11:05:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:31.318 11:05:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2497843 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2497843 ']' 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2497843 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497843 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497843' 00:04:31.318 killing process with pid 2497843 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 2497843 00:04:31.318 11:05:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 2497843 00:04:31.579 00:04:31.579 real 0m1.504s 00:04:31.579 user 0m1.661s 00:04:31.579 sys 0m0.410s 00:04:31.579 11:05:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.579 11:05:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.579 ************************************ 00:04:31.579 END TEST alias_rpc 00:04:31.579 ************************************ 00:04:31.579 11:05:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:31.579 11:05:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.579 11:05:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.579 11:05:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.579 11:05:24 -- common/autotest_common.sh@10 -- # set +x 00:04:31.579 ************************************ 00:04:31.579 START TEST spdkcli_tcp 00:04:31.579 ************************************ 00:04:31.579 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.579 * Looking for test storage... 
00:04:31.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.841 11:05:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc 
geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.841 --rc genhtml_branch_coverage=1 00:04:31.841 --rc genhtml_function_coverage=1 00:04:31.841 --rc genhtml_legend=1 00:04:31.841 --rc geninfo_all_blocks=1 00:04:31.841 --rc geninfo_unexecuted_blocks=1 00:04:31.841 00:04:31.841 ' 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2498244 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2498244 00:04:31.841 11:05:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2498244 ']' 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.841 11:05:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.841 [2024-11-20 11:05:24.491344] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:04:31.841 [2024-11-20 11:05:24.491415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498244 ] 00:04:31.841 [2024-11-20 11:05:24.579658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.102 [2024-11-20 11:05:24.620959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.102 [2024-11-20 11:05:24.620961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.672 11:05:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.672 11:05:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:32.672 11:05:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2498257 00:04:32.672 11:05:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:32.672 11:05:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:32.933 [ 00:04:32.933 "bdev_malloc_delete", 00:04:32.933 "bdev_malloc_create", 00:04:32.933 "bdev_null_resize", 00:04:32.933 "bdev_null_delete", 00:04:32.933 "bdev_null_create", 00:04:32.933 "bdev_nvme_cuse_unregister", 00:04:32.933 "bdev_nvme_cuse_register", 00:04:32.933 "bdev_opal_new_user", 00:04:32.933 "bdev_opal_set_lock_state", 00:04:32.933 "bdev_opal_delete", 00:04:32.933 "bdev_opal_get_info", 00:04:32.933 "bdev_opal_create", 00:04:32.933 "bdev_nvme_opal_revert", 00:04:32.933 "bdev_nvme_opal_init", 00:04:32.933 "bdev_nvme_send_cmd", 00:04:32.933 "bdev_nvme_set_keys", 00:04:32.933 "bdev_nvme_get_path_iostat", 00:04:32.933 "bdev_nvme_get_mdns_discovery_info", 00:04:32.933 "bdev_nvme_stop_mdns_discovery", 00:04:32.933 "bdev_nvme_start_mdns_discovery", 00:04:32.933 "bdev_nvme_set_multipath_policy", 00:04:32.933 "bdev_nvme_set_preferred_path", 00:04:32.933 "bdev_nvme_get_io_paths", 00:04:32.933 "bdev_nvme_remove_error_injection", 00:04:32.933 "bdev_nvme_add_error_injection", 00:04:32.933 "bdev_nvme_get_discovery_info", 00:04:32.933 "bdev_nvme_stop_discovery", 00:04:32.933 "bdev_nvme_start_discovery", 00:04:32.933 "bdev_nvme_get_controller_health_info", 00:04:32.933 "bdev_nvme_disable_controller", 00:04:32.933 "bdev_nvme_enable_controller", 00:04:32.933 "bdev_nvme_reset_controller", 00:04:32.933 "bdev_nvme_get_transport_statistics", 00:04:32.933 "bdev_nvme_apply_firmware", 00:04:32.933 "bdev_nvme_detach_controller", 00:04:32.933 "bdev_nvme_get_controllers", 00:04:32.933 "bdev_nvme_attach_controller", 00:04:32.933 "bdev_nvme_set_hotplug", 00:04:32.933 "bdev_nvme_set_options", 00:04:32.933 "bdev_passthru_delete", 00:04:32.933 "bdev_passthru_create", 00:04:32.933 "bdev_lvol_set_parent_bdev", 00:04:32.933 "bdev_lvol_set_parent", 00:04:32.933 "bdev_lvol_check_shallow_copy", 00:04:32.933 "bdev_lvol_start_shallow_copy", 00:04:32.933 "bdev_lvol_grow_lvstore", 00:04:32.933 "bdev_lvol_get_lvols", 00:04:32.933 "bdev_lvol_get_lvstores", 00:04:32.933 "bdev_lvol_delete", 00:04:32.933 "bdev_lvol_set_read_only", 00:04:32.933 "bdev_lvol_resize", 00:04:32.933 "bdev_lvol_decouple_parent", 00:04:32.933 "bdev_lvol_inflate", 00:04:32.933 "bdev_lvol_rename", 00:04:32.933 "bdev_lvol_clone_bdev", 00:04:32.933 "bdev_lvol_clone", 00:04:32.933 "bdev_lvol_snapshot", 00:04:32.933 "bdev_lvol_create", 00:04:32.933 "bdev_lvol_delete_lvstore", 00:04:32.933 "bdev_lvol_rename_lvstore", 
00:04:32.933 "bdev_lvol_create_lvstore", 00:04:32.933 "bdev_raid_set_options", 00:04:32.933 "bdev_raid_remove_base_bdev", 00:04:32.933 "bdev_raid_add_base_bdev", 00:04:32.933 "bdev_raid_delete", 00:04:32.933 "bdev_raid_create", 00:04:32.933 "bdev_raid_get_bdevs", 00:04:32.933 "bdev_error_inject_error", 00:04:32.933 "bdev_error_delete", 00:04:32.933 "bdev_error_create", 00:04:32.933 "bdev_split_delete", 00:04:32.933 "bdev_split_create", 00:04:32.933 "bdev_delay_delete", 00:04:32.933 "bdev_delay_create", 00:04:32.933 "bdev_delay_update_latency", 00:04:32.933 "bdev_zone_block_delete", 00:04:32.933 "bdev_zone_block_create", 00:04:32.933 "blobfs_create", 00:04:32.933 "blobfs_detect", 00:04:32.933 "blobfs_set_cache_size", 00:04:32.933 "bdev_aio_delete", 00:04:32.933 "bdev_aio_rescan", 00:04:32.933 "bdev_aio_create", 00:04:32.933 "bdev_ftl_set_property", 00:04:32.933 "bdev_ftl_get_properties", 00:04:32.933 "bdev_ftl_get_stats", 00:04:32.933 "bdev_ftl_unmap", 00:04:32.933 "bdev_ftl_unload", 00:04:32.933 "bdev_ftl_delete", 00:04:32.933 "bdev_ftl_load", 00:04:32.933 "bdev_ftl_create", 00:04:32.933 "bdev_virtio_attach_controller", 00:04:32.933 "bdev_virtio_scsi_get_devices", 00:04:32.933 "bdev_virtio_detach_controller", 00:04:32.933 "bdev_virtio_blk_set_hotplug", 00:04:32.933 "bdev_iscsi_delete", 00:04:32.933 "bdev_iscsi_create", 00:04:32.933 "bdev_iscsi_set_options", 00:04:32.933 "accel_error_inject_error", 00:04:32.933 "ioat_scan_accel_module", 00:04:32.933 "dsa_scan_accel_module", 00:04:32.933 "iaa_scan_accel_module", 00:04:32.933 "vfu_virtio_create_fs_endpoint", 00:04:32.933 "vfu_virtio_create_scsi_endpoint", 00:04:32.933 "vfu_virtio_scsi_remove_target", 00:04:32.933 "vfu_virtio_scsi_add_target", 00:04:32.933 "vfu_virtio_create_blk_endpoint", 00:04:32.933 "vfu_virtio_delete_endpoint", 00:04:32.933 "keyring_file_remove_key", 00:04:32.933 "keyring_file_add_key", 00:04:32.933 "keyring_linux_set_options", 00:04:32.933 "fsdev_aio_delete", 00:04:32.933 "fsdev_aio_create", 00:04:32.933 "iscsi_get_histogram", 00:04:32.933 "iscsi_enable_histogram", 00:04:32.933 "iscsi_set_options", 00:04:32.933 "iscsi_get_auth_groups", 00:04:32.933 "iscsi_auth_group_remove_secret", 00:04:32.933 "iscsi_auth_group_add_secret", 00:04:32.933 "iscsi_delete_auth_group", 00:04:32.933 "iscsi_create_auth_group", 00:04:32.933 "iscsi_set_discovery_auth", 00:04:32.933 "iscsi_get_options", 00:04:32.933 "iscsi_target_node_request_logout", 00:04:32.933 "iscsi_target_node_set_redirect", 00:04:32.933 "iscsi_target_node_set_auth", 00:04:32.933 "iscsi_target_node_add_lun", 00:04:32.933 "iscsi_get_stats", 00:04:32.934 "iscsi_get_connections", 00:04:32.934 "iscsi_portal_group_set_auth", 00:04:32.934 "iscsi_start_portal_group", 00:04:32.934 "iscsi_delete_portal_group", 00:04:32.934 "iscsi_create_portal_group", 00:04:32.934 "iscsi_get_portal_groups", 00:04:32.934 "iscsi_delete_target_node", 00:04:32.934 "iscsi_target_node_remove_pg_ig_maps", 00:04:32.934 "iscsi_target_node_add_pg_ig_maps", 00:04:32.934 "iscsi_create_target_node", 00:04:32.934 "iscsi_get_target_nodes", 00:04:32.934 "iscsi_delete_initiator_group", 00:04:32.934 "iscsi_initiator_group_remove_initiators", 00:04:32.934 "iscsi_initiator_group_add_initiators", 00:04:32.934 "iscsi_create_initiator_group", 00:04:32.934 "iscsi_get_initiator_groups", 00:04:32.934 "nvmf_set_crdt", 00:04:32.934 "nvmf_set_config", 00:04:32.934 "nvmf_set_max_subsystems", 00:04:32.934 "nvmf_stop_mdns_prr", 00:04:32.934 "nvmf_publish_mdns_prr", 00:04:32.934 "nvmf_subsystem_get_listeners", 00:04:32.934 
"nvmf_subsystem_get_qpairs", 00:04:32.934 "nvmf_subsystem_get_controllers", 00:04:32.934 "nvmf_get_stats", 00:04:32.934 "nvmf_get_transports", 00:04:32.934 "nvmf_create_transport", 00:04:32.934 "nvmf_get_targets", 00:04:32.934 "nvmf_delete_target", 00:04:32.934 "nvmf_create_target", 00:04:32.934 "nvmf_subsystem_allow_any_host", 00:04:32.934 "nvmf_subsystem_set_keys", 00:04:32.934 "nvmf_subsystem_remove_host", 00:04:32.934 "nvmf_subsystem_add_host", 00:04:32.934 "nvmf_ns_remove_host", 00:04:32.934 "nvmf_ns_add_host", 00:04:32.934 "nvmf_subsystem_remove_ns", 00:04:32.934 "nvmf_subsystem_set_ns_ana_group", 00:04:32.934 "nvmf_subsystem_add_ns", 00:04:32.934 "nvmf_subsystem_listener_set_ana_state", 00:04:32.934 "nvmf_discovery_get_referrals", 00:04:32.934 "nvmf_discovery_remove_referral", 00:04:32.934 "nvmf_discovery_add_referral", 00:04:32.934 "nvmf_subsystem_remove_listener", 00:04:32.934 "nvmf_subsystem_add_listener", 00:04:32.934 "nvmf_delete_subsystem", 00:04:32.934 "nvmf_create_subsystem", 00:04:32.934 "nvmf_get_subsystems", 00:04:32.934 "env_dpdk_get_mem_stats", 00:04:32.934 "nbd_get_disks", 00:04:32.934 "nbd_stop_disk", 00:04:32.934 "nbd_start_disk", 00:04:32.934 "ublk_recover_disk", 00:04:32.934 "ublk_get_disks", 00:04:32.934 "ublk_stop_disk", 00:04:32.934 "ublk_start_disk", 00:04:32.934 "ublk_destroy_target", 00:04:32.934 "ublk_create_target", 00:04:32.934 "virtio_blk_create_transport", 00:04:32.934 "virtio_blk_get_transports", 00:04:32.934 "vhost_controller_set_coalescing", 00:04:32.934 "vhost_get_controllers", 00:04:32.934 "vhost_delete_controller", 00:04:32.934 "vhost_create_blk_controller", 00:04:32.934 "vhost_scsi_controller_remove_target", 00:04:32.934 "vhost_scsi_controller_add_target", 00:04:32.934 "vhost_start_scsi_controller", 00:04:32.934 "vhost_create_scsi_controller", 00:04:32.934 "thread_set_cpumask", 00:04:32.934 "scheduler_set_options", 00:04:32.934 "framework_get_governor", 00:04:32.934 "framework_get_scheduler", 00:04:32.934 "framework_set_scheduler", 00:04:32.934 "framework_get_reactors", 00:04:32.934 "thread_get_io_channels", 00:04:32.934 "thread_get_pollers", 00:04:32.934 "thread_get_stats", 00:04:32.934 "framework_monitor_context_switch", 00:04:32.934 "spdk_kill_instance", 00:04:32.934 "log_enable_timestamps", 00:04:32.934 "log_get_flags", 00:04:32.934 "log_clear_flag", 00:04:32.934 "log_set_flag", 00:04:32.934 "log_get_level", 00:04:32.934 "log_set_level", 00:04:32.934 "log_get_print_level", 00:04:32.934 "log_set_print_level", 00:04:32.934 "framework_enable_cpumask_locks", 00:04:32.934 "framework_disable_cpumask_locks", 00:04:32.934 "framework_wait_init", 00:04:32.934 "framework_start_init", 00:04:32.934 "scsi_get_devices", 00:04:32.934 "bdev_get_histogram", 00:04:32.934 "bdev_enable_histogram", 00:04:32.934 "bdev_set_qos_limit", 00:04:32.934 "bdev_set_qd_sampling_period", 00:04:32.934 "bdev_get_bdevs", 00:04:32.934 "bdev_reset_iostat", 00:04:32.934 "bdev_get_iostat", 00:04:32.934 "bdev_examine", 00:04:32.934 "bdev_wait_for_examine", 00:04:32.934 "bdev_set_options", 00:04:32.934 "accel_get_stats", 00:04:32.934 "accel_set_options", 00:04:32.934 "accel_set_driver", 00:04:32.934 "accel_crypto_key_destroy", 00:04:32.934 "accel_crypto_keys_get", 00:04:32.934 "accel_crypto_key_create", 00:04:32.934 "accel_assign_opc", 00:04:32.934 "accel_get_module_info", 00:04:32.934 "accel_get_opc_assignments", 00:04:32.934 "vmd_rescan", 00:04:32.934 "vmd_remove_device", 00:04:32.934 "vmd_enable", 00:04:32.934 "sock_get_default_impl", 00:04:32.934 "sock_set_default_impl", 
00:04:32.934 "sock_impl_set_options", 00:04:32.934 "sock_impl_get_options", 00:04:32.934 "iobuf_get_stats", 00:04:32.934 "iobuf_set_options", 00:04:32.934 "keyring_get_keys", 00:04:32.934 "vfu_tgt_set_base_path", 00:04:32.934 "framework_get_pci_devices", 00:04:32.934 "framework_get_config", 00:04:32.934 "framework_get_subsystems", 00:04:32.934 "fsdev_set_opts", 00:04:32.934 "fsdev_get_opts", 00:04:32.934 "trace_get_info", 00:04:32.934 "trace_get_tpoint_group_mask", 00:04:32.934 "trace_disable_tpoint_group", 00:04:32.934 "trace_enable_tpoint_group", 00:04:32.934 "trace_clear_tpoint_mask", 00:04:32.934 "trace_set_tpoint_mask", 00:04:32.934 "notify_get_notifications", 00:04:32.934 "notify_get_types", 00:04:32.934 "spdk_get_version", 00:04:32.934 "rpc_get_methods" 00:04:32.934 ] 00:04:32.934 11:05:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.934 11:05:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:32.934 11:05:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2498244 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2498244 ']' 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2498244 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498244 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498244' 00:04:32.934 killing process with pid 2498244 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2498244 00:04:32.934 11:05:25 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2498244 00:04:33.195 00:04:33.195 real 0m1.552s 00:04:33.195 user 0m2.853s 00:04:33.195 sys 0m0.461s 00:04:33.195 11:05:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.195 11:05:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.195 ************************************ 00:04:33.195 END TEST spdkcli_tcp 00:04:33.195 ************************************ 00:04:33.195 11:05:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.195 11:05:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.195 11:05:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.195 11:05:25 -- common/autotest_common.sh@10 -- # set +x 00:04:33.195 ************************************ 00:04:33.195 START TEST dpdk_mem_utility 00:04:33.195 ************************************ 00:04:33.195 11:05:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.456 * Looking for test storage... 
00:04:33.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:33.456 11:05:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.456 11:05:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.456 11:05:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.456 11:05:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.456 --rc genhtml_branch_coverage=1 00:04:33.456 --rc genhtml_function_coverage=1 00:04:33.456 --rc genhtml_legend=1 00:04:33.456 --rc geninfo_all_blocks=1 00:04:33.456 --rc geninfo_unexecuted_blocks=1 00:04:33.456 00:04:33.456 ' 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.456 --rc 
genhtml_branch_coverage=1 00:04:33.456 --rc genhtml_function_coverage=1 00:04:33.456 --rc genhtml_legend=1 00:04:33.456 --rc geninfo_all_blocks=1 00:04:33.456 --rc geninfo_unexecuted_blocks=1 00:04:33.456 00:04:33.456 ' 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.456 --rc genhtml_branch_coverage=1 00:04:33.456 --rc genhtml_function_coverage=1 00:04:33.456 --rc genhtml_legend=1 00:04:33.456 --rc geninfo_all_blocks=1 00:04:33.456 --rc geninfo_unexecuted_blocks=1 00:04:33.456 00:04:33.456 ' 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.456 --rc genhtml_branch_coverage=1 00:04:33.456 --rc genhtml_function_coverage=1 00:04:33.456 --rc genhtml_legend=1 00:04:33.456 --rc geninfo_all_blocks=1 00:04:33.456 --rc geninfo_unexecuted_blocks=1 00:04:33.456 00:04:33.456 ' 00:04:33.456 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:33.456 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2498637 00:04:33.456 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2498637 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2498637 ']' 00:04:33.456 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.456 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.456 [2024-11-20 11:05:26.119486] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:04:33.456 [2024-11-20 11:05:26.119563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498637 ] 00:04:33.717 [2024-11-20 11:05:26.208074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.717 [2024-11-20 11:05:26.243081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.288 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.288 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:34.288 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:34.288 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:34.288 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.288 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.288 { 00:04:34.288 "filename": "/tmp/spdk_mem_dump.txt" 00:04:34.288 } 00:04:34.288 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.288 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:34.288 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:34.288 1 heaps totaling size 810.000000 MiB 00:04:34.288 size: 810.000000 MiB heap id: 0 00:04:34.288 end heaps---------- 00:04:34.288 9 mempools totaling size 595.772034 MiB 00:04:34.288 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:34.288 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:34.288 size: 92.545471 MiB name: bdev_io_2498637 00:04:34.288 size: 50.003479 MiB name: msgpool_2498637 00:04:34.288 size: 36.509338 MiB name: fsdev_io_2498637 00:04:34.288 size: 21.763794 MiB name: PDU_Pool 00:04:34.288 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:34.288 size: 4.133484 MiB name: evtpool_2498637 00:04:34.288 size: 0.026123 MiB name: Session_Pool 00:04:34.288 end mempools------- 00:04:34.288 6 memzones totaling size 4.142822 MiB 00:04:34.288 size: 1.000366 MiB name: RG_ring_0_2498637 00:04:34.288 size: 1.000366 MiB name: RG_ring_1_2498637 00:04:34.288 size: 1.000366 MiB name: RG_ring_4_2498637 00:04:34.288 size: 1.000366 MiB name: RG_ring_5_2498637 00:04:34.288 size: 0.125366 MiB name: RG_ring_2_2498637 00:04:34.288 size: 0.015991 MiB name: RG_ring_3_2498637 00:04:34.288 end memzones------- 00:04:34.288 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:34.288 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:34.288 list of free elements. 
size: 10.862488 MiB 00:04:34.288 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:34.288 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:34.288 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:34.288 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:34.288 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:34.288 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:34.288 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:34.288 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:34.288 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:34.288 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:34.288 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:34.288 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:34.288 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:34.288 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:34.288 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:34.288 list of standard malloc elements. size: 199.218628 MiB 00:04:34.288 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:34.288 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:34.288 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:34.288 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:34.288 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:34.288 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:34.288 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:34.288 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:34.288 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:34.288 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:34.288 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:34.288 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:34.288 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:34.288 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:34.288 list of memzone associated elements. size: 599.918884 MiB 00:04:34.288 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:34.288 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:34.288 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:34.288 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:34.288 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:34.288 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2498637_0 00:04:34.288 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:34.288 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2498637_0 00:04:34.288 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:34.288 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2498637_0 00:04:34.288 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:34.288 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:34.288 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:34.288 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:34.288 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:34.288 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2498637_0 00:04:34.288 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:34.288 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2498637 00:04:34.289 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:34.289 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2498637 00:04:34.289 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:34.289 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:34.289 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:34.289 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:34.289 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:34.289 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:34.289 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:34.289 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:34.289 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:34.289 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2498637 00:04:34.289 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:34.289 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2498637 00:04:34.289 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:34.289 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2498637 00:04:34.289 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:34.289 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2498637 00:04:34.289 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:34.289 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2498637 00:04:34.289 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:34.289 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2498637 00:04:34.289 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:34.289 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:34.289 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:34.289 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:34.289 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:34.289 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:34.289 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:34.289 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2498637 00:04:34.289 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:34.289 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2498637 00:04:34.289 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:34.289 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:34.289 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:34.289 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:34.289 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:34.289 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2498637 00:04:34.289 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:34.289 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:34.289 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:34.289 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2498637 00:04:34.289 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:34.289 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2498637 00:04:34.289 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:34.289 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2498637 00:04:34.289 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:34.289 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:34.289 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:34.289 11:05:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2498637 00:04:34.289 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2498637 ']' 00:04:34.289 11:05:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2498637 00:04:34.289 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:34.289 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.289 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498637 00:04:34.550 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.550 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.550 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498637' 00:04:34.550 killing process with pid 2498637 00:04:34.550 11:05:27 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2498637 00:04:34.550 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2498637 00:04:34.550 00:04:34.550 real 0m1.395s 00:04:34.550 user 0m1.454s 00:04:34.550 sys 0m0.419s 00:04:34.550 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.550 11:05:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.550 ************************************ 00:04:34.550 END TEST dpdk_mem_utility 00:04:34.550 ************************************ 00:04:34.550 11:05:27 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.550 11:05:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.550 11:05:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.550 11:05:27 -- common/autotest_common.sh@10 -- # set +x 00:04:34.811 ************************************ 00:04:34.811 START TEST event 00:04:34.811 ************************************ 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.811 * Looking for test storage... 00:04:34.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.811 11:05:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.811 11:05:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.811 11:05:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.811 11:05:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.811 11:05:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.811 11:05:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.811 11:05:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.811 11:05:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.811 11:05:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.811 11:05:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.811 11:05:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.811 11:05:27 event -- scripts/common.sh@344 -- # case "$op" in 00:04:34.811 11:05:27 event -- scripts/common.sh@345 -- # : 1 00:04:34.811 11:05:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.811 11:05:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
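The dpdk_mem_utility test above pairs the env_dpdk_get_mem_stats RPC with scripts/dpdk_mem_info.py: the RPC makes the target write its DPDK memory state to a dump file, and the script renders that dump. A sketch of the sequence, assuming a running spdk_tgt on the default RPC socket:

$ ./scripts/rpc.py env_dpdk_get_mem_stats   # target reports the dump file, /tmp/spdk_mem_dump.txt here
$ ./scripts/dpdk_mem_info.py                # heap/mempool/memzone summary, as printed above
$ ./scripts/dpdk_mem_info.py -m 0           # the flag the test uses to get the element-level listing above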
ver1_l : ver2_l) )) 00:04:34.811 11:05:27 event -- scripts/common.sh@365 -- # decimal 1 00:04:34.811 11:05:27 event -- scripts/common.sh@353 -- # local d=1 00:04:34.811 11:05:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.811 11:05:27 event -- scripts/common.sh@355 -- # echo 1 00:04:34.811 11:05:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.811 11:05:27 event -- scripts/common.sh@366 -- # decimal 2 00:04:34.811 11:05:27 event -- scripts/common.sh@353 -- # local d=2 00:04:34.811 11:05:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.811 11:05:27 event -- scripts/common.sh@355 -- # echo 2 00:04:34.811 11:05:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.811 11:05:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.811 11:05:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.811 11:05:27 event -- scripts/common.sh@368 -- # return 0 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.811 --rc genhtml_branch_coverage=1 00:04:34.811 --rc genhtml_function_coverage=1 00:04:34.811 --rc genhtml_legend=1 00:04:34.811 --rc geninfo_all_blocks=1 00:04:34.811 --rc geninfo_unexecuted_blocks=1 00:04:34.811 00:04:34.811 ' 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.811 --rc genhtml_branch_coverage=1 00:04:34.811 --rc genhtml_function_coverage=1 00:04:34.811 --rc genhtml_legend=1 00:04:34.811 --rc geninfo_all_blocks=1 00:04:34.811 --rc geninfo_unexecuted_blocks=1 00:04:34.811 00:04:34.811 ' 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.811 --rc genhtml_branch_coverage=1 00:04:34.811 --rc genhtml_function_coverage=1 00:04:34.811 --rc genhtml_legend=1 00:04:34.811 --rc geninfo_all_blocks=1 00:04:34.811 --rc geninfo_unexecuted_blocks=1 00:04:34.811 00:04:34.811 ' 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.811 --rc genhtml_branch_coverage=1 00:04:34.811 --rc genhtml_function_coverage=1 00:04:34.811 --rc genhtml_legend=1 00:04:34.811 --rc geninfo_all_blocks=1 00:04:34.811 --rc geninfo_unexecuted_blocks=1 00:04:34.811 00:04:34.811 ' 00:04:34.811 11:05:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:34.811 11:05:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:34.811 11:05:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:34.811 11:05:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.811 11:05:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.071 ************************************ 00:04:35.071 START TEST event_perf 00:04:35.071 ************************************ 00:04:35.071 11:05:27 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:35.071 Running I/O for 1 seconds...[2024-11-20 11:05:27.586249] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:35.071 [2024-11-20 11:05:27.586363] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498939 ] 00:04:35.071 [2024-11-20 11:05:27.676367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:35.071 [2024-11-20 11:05:27.721319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.071 [2024-11-20 11:05:27.721475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:35.071 [2024-11-20 11:05:27.721631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.071 Running I/O for 1 seconds...[2024-11-20 11:05:27.721631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.010 00:04:36.010 lcore 0: 179842 00:04:36.010 lcore 1: 179845 00:04:36.010 lcore 2: 179845 00:04:36.010 lcore 3: 179846 00:04:36.010 done. 00:04:36.010 00:04:36.010 real 0m1.185s 00:04:36.010 user 0m4.097s 00:04:36.010 sys 0m0.086s 00:04:36.010 11:05:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.010 11:05:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:36.010 ************************************ 00:04:36.010 END TEST event_perf 00:04:36.010 ************************************ 00:04:36.270 11:05:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.270 11:05:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:36.270 11:05:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.270 11:05:28 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.270 ************************************ 00:04:36.270 START TEST event_reactor 00:04:36.270 ************************************ 00:04:36.270 11:05:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:36.270 [2024-11-20 11:05:28.846434] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
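For reference, the event_perf invocation above exercises four reactors: -m 0xF is a hexadecimal core mask (0b1111, cores 0-3, matching the four lcore counters printed), and -t 1 runs the benchmark for one second:

$ ./test/event/event_perf/event_perf -m 0xF -t 1   # this run counted ~180k events per lcore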
00:04:36.270 [2024-11-20 11:05:28.846512] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499102 ] 00:04:36.270 [2024-11-20 11:05:28.936466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.270 [2024-11-20 11:05:28.974023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.654 test_start 00:04:37.654 oneshot 00:04:37.654 tick 100 00:04:37.654 tick 100 00:04:37.654 tick 250 00:04:37.654 tick 100 00:04:37.654 tick 100 00:04:37.654 tick 100 00:04:37.654 tick 250 00:04:37.654 tick 500 00:04:37.654 tick 100 00:04:37.654 tick 100 00:04:37.654 tick 250 00:04:37.654 tick 100 00:04:37.654 tick 100 00:04:37.654 test_end 00:04:37.654 00:04:37.654 real 0m1.174s 00:04:37.654 user 0m1.088s 00:04:37.654 sys 0m0.082s 00:04:37.654 11:05:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.654 11:05:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:37.654 ************************************ 00:04:37.654 END TEST event_reactor 00:04:37.654 ************************************ 00:04:37.654 11:05:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:37.654 11:05:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:37.654 11:05:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.654 11:05:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.654 ************************************ 00:04:37.654 START TEST event_reactor_perf 00:04:37.654 ************************************ 00:04:37.654 11:05:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:37.654 [2024-11-20 11:05:30.098569] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
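The reactor test runs the same framework on a single core (-c 0x1 in the EAL arguments) for the duration given by -t; the oneshot and tick lines are emitted by the pollers the test registers (reading the tick numbers as poller periods is an assumption here, not something the log states):

$ ./test/event/reactor/reactor -t 1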
00:04:37.654 [2024-11-20 11:05:30.098666] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499448 ] 00:04:37.654 [2024-11-20 11:05:30.186897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.654 [2024-11-20 11:05:30.225377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.595 test_start 00:04:38.595 test_end 00:04:38.595 Performance: 538294 events per second 00:04:38.595 00:04:38.595 real 0m1.175s 00:04:38.595 user 0m1.095s 00:04:38.595 sys 0m0.076s 00:04:38.595 11:05:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.595 11:05:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.595 ************************************ 00:04:38.595 END TEST event_reactor_perf 00:04:38.595 ************************************ 00:04:38.595 11:05:31 event -- event/event.sh@49 -- # uname -s 00:04:38.595 11:05:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:38.595 11:05:31 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:38.595 11:05:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.595 11:05:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.595 11:05:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.595 ************************************ 00:04:38.595 START TEST event_scheduler 00:04:38.595 ************************************ 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:38.856 * Looking for test storage... 
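The reported throughput gives a quick per-event cost estimate: 538294 events per second is roughly 1/538294 s, about 1.9 microseconds of framework overhead per event on this machine. To reproduce:

$ ./test/event/reactor_perf/reactor_perf -t 1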
00:04:38.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.856 11:05:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.856 --rc genhtml_branch_coverage=1 00:04:38.856 --rc genhtml_function_coverage=1 00:04:38.856 --rc genhtml_legend=1 00:04:38.856 --rc geninfo_all_blocks=1 00:04:38.856 --rc geninfo_unexecuted_blocks=1 00:04:38.856 00:04:38.856 ' 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.856 --rc genhtml_branch_coverage=1 00:04:38.856 --rc genhtml_function_coverage=1 00:04:38.856 --rc genhtml_legend=1 00:04:38.856 --rc geninfo_all_blocks=1 00:04:38.856 --rc geninfo_unexecuted_blocks=1 00:04:38.856 00:04:38.856 ' 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.856 --rc genhtml_branch_coverage=1 00:04:38.856 --rc genhtml_function_coverage=1 00:04:38.856 --rc genhtml_legend=1 00:04:38.856 --rc geninfo_all_blocks=1 00:04:38.856 --rc geninfo_unexecuted_blocks=1 00:04:38.856 00:04:38.856 ' 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.856 --rc genhtml_branch_coverage=1 00:04:38.856 --rc genhtml_function_coverage=1 00:04:38.856 --rc genhtml_legend=1 00:04:38.856 --rc geninfo_all_blocks=1 00:04:38.856 --rc geninfo_unexecuted_blocks=1 00:04:38.856 00:04:38.856 ' 00:04:38.856 11:05:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:38.856 11:05:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2499833 00:04:38.856 11:05:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.856 11:05:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2499833 00:04:38.856 11:05:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2499833 ']' 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.856 11:05:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.856 [2024-11-20 11:05:31.594360] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:04:38.856 [2024-11-20 11:05:31.594417] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499833 ] 00:04:39.117 [2024-11-20 11:05:31.684535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:39.117 [2024-11-20 11:05:31.730795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.117 [2024-11-20 11:05:31.730954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.117 [2024-11-20 11:05:31.731113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:39.117 [2024-11-20 11:05:31.731114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:39.687 11:05:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.687 [2024-11-20 11:05:32.397528] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:39.687 [2024-11-20 11:05:32.397547] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:39.687 [2024-11-20 11:05:32.397558] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:39.687 [2024-11-20 11:05:32.397564] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:39.687 [2024-11-20 11:05:32.397569] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.687 11:05:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.687 11:05:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.948 [2024-11-20 11:05:32.464258] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
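The startup sequence above is the standard way to select a scheduler at runtime: the app is launched with --wait-for-rpc so initialization pauses, the scheduler is chosen over RPC, and framework_start_init resumes bring-up. A sketch with the same arguments as this run (-m 0xF core mask, -p 0x2 main lcore, matching --main-lcore=2 in the EAL parameters):

$ ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
$ ./scripts/rpc.py framework_set_scheduler dynamic   # the dpdk governor may fail to init, as logged above; dynamic still loads
$ ./scripts/rpc.py framework_start_init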
00:04:39.948 11:05:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.948 11:05:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:39.948 11:05:32 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.948 11:05:32 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.948 11:05:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.948 ************************************ 00:04:39.948 START TEST scheduler_create_thread 00:04:39.948 ************************************ 00:04:39.948 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 2 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 3 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 4 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 5 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 6 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 7 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 8 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.949 9 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.949 11:05:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.520 10 00:04:40.520 11:05:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.520 11:05:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:40.520 11:05:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.520 11:05:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:41.903 11:05:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.903 11:05:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:41.903 11:05:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:41.903 11:05:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.903 11:05:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.474 11:05:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.474 11:05:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:42.474 11:05:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.474 11:05:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.416 11:05:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.416 11:05:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:43.416 11:05:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:43.416 11:05:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.416 11:05:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.380 11:05:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.380 00:04:44.380 real 0m4.224s 00:04:44.380 user 0m0.027s 00:04:44.380 sys 0m0.005s 00:04:44.380 11:05:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.380 11:05:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.380 ************************************ 00:04:44.380 END TEST scheduler_create_thread 00:04:44.380 ************************************ 00:04:44.380 11:05:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:44.380 11:05:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2499833 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2499833 ']' 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2499833 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499833 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499833' 00:04:44.380 killing process with pid 2499833 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2499833 00:04:44.380 11:05:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2499833 00:04:44.380 [2024-11-20 11:05:37.005914] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
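scheduler_create_thread drives test-only RPCs (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete) that rpc.py loads via --plugin scheduler_plugin; the calls below mirror the run above, assuming the plugin module is on PYTHONPATH. The numeric ids (11, 12) are the thread ids the create calls returned in this run:

$ ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # pinned to core 0, 100% active
$ ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # drop thread 11 to 50% active
$ ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12          # remove thread 12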
00:04:44.688 00:04:44.688 real 0m5.828s 00:04:44.688 user 0m12.875s 00:04:44.688 sys 0m0.410s 00:04:44.688 11:05:37 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.688 11:05:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 ************************************ 00:04:44.688 END TEST event_scheduler 00:04:44.688 ************************************ 00:04:44.688 11:05:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:44.688 11:05:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:44.688 11:05:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.688 11:05:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.688 11:05:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 ************************************ 00:04:44.688 START TEST app_repeat 00:04:44.688 ************************************ 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2500917 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2500917' 00:04:44.688 Process app_repeat pid: 2500917 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:44.688 spdk_app_start Round 0 00:04:44.688 11:05:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2500917 /var/tmp/spdk-nbd.sock 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2500917 ']' 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.688 11:05:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 [2024-11-20 11:05:37.286432] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:04:44.688 [2024-11-20 11:05:37.286499] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500917 ] 00:04:44.688 [2024-11-20 11:05:37.376719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.688 [2024-11-20 11:05:37.409721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.688 [2024-11-20 11:05:37.409722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.976 11:05:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.976 11:05:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:44.976 11:05:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.976 Malloc0 00:04:44.976 11:05:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.241 Malloc1 00:04:45.241 11:05:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.241 11:05:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.502 /dev/nbd0 00:04:45.502 11:05:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.502 11:05:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.502 1+0 records in 00:04:45.502 1+0 records out 00:04:45.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274724 s, 14.9 MB/s 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.502 11:05:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.502 11:05:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.502 11:05:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.502 11:05:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.763 /dev/nbd1 00:04:45.763 11:05:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.763 11:05:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.763 1+0 records in 00:04:45.763 1+0 records out 00:04:45.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236583 s, 17.3 MB/s 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.763 11:05:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.763 11:05:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.763 11:05:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.763 
11:05:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.763 11:05:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.764 11:05:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.764 11:05:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.764 { 00:04:45.764 "nbd_device": "/dev/nbd0", 00:04:45.764 "bdev_name": "Malloc0" 00:04:45.764 }, 00:04:45.764 { 00:04:45.764 "nbd_device": "/dev/nbd1", 00:04:45.764 "bdev_name": "Malloc1" 00:04:45.764 } 00:04:45.764 ]' 00:04:45.764 11:05:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.764 { 00:04:45.764 "nbd_device": "/dev/nbd0", 00:04:45.764 "bdev_name": "Malloc0" 00:04:45.764 }, 00:04:45.764 { 00:04:45.764 "nbd_device": "/dev/nbd1", 00:04:45.764 "bdev_name": "Malloc1" 00:04:45.764 } 00:04:45.764 ]' 00:04:45.764 11:05:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.024 11:05:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.025 /dev/nbd1' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.025 /dev/nbd1' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.025 256+0 records in 00:04:46.025 256+0 records out 00:04:46.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127081 s, 82.5 MB/s 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.025 256+0 records in 00:04:46.025 256+0 records out 00:04:46.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01198 s, 87.5 MB/s 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.025 256+0 records in 00:04:46.025 256+0 records out 00:04:46.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131571 s, 79.7 MB/s 00:04:46.025 11:05:38 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.025 11:05:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.286 11:05:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.554 11:05:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.554 11:05:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.814 11:05:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.814 [2024-11-20 11:05:39.506677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.814 [2024-11-20 11:05:39.535848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.814 [2024-11-20 11:05:39.535849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.075 [2024-11-20 11:05:39.564881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.075 [2024-11-20 11:05:39.564914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.375 11:05:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.375 11:05:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:50.375 spdk_app_start Round 1 00:04:50.375 11:05:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2500917 /var/tmp/spdk-nbd.sock 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2500917 ']' 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
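Round 0 above also shows the readiness handshake that follows every nbd_start_disk in this log: waitfornbd polls /proc/partitions until the named device appears, then proves the device actually services reads with a single 4 KiB direct-I/O transfer whose output size is checked via stat. A condensed sketch of that helper (the retry limit, block size, and non-zero size check mirror the trace; the temp path and sleep interval are assumptions):

  waitfornbd() {
      local nbd_name=$1 i tmp=/tmp/nbdtest
      # wait for the kernel to publish the device in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # prove the device services reads: one 4 KiB block, direct I/O
      for ((i = 1; i <= 20; i++)); do
          if dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
             [[ $(stat -c %s "$tmp") != 0 ]]; then
              rm -f "$tmp"
              return 0
          fi
          sleep 0.1
      done
      rm -f "$tmp"
      return 1
  }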
00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.375 11:05:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.375 11:05:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.375 Malloc0 00:04:50.375 11:05:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.375 Malloc1 00:04:50.375 11:05:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.375 11:05:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.375 11:05:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.375 11:05:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.375 11:05:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:50.636 /dev/nbd0 00:04:50.636 11:05:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:50.636 11:05:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:50.636 1+0 records in 00:04:50.636 1+0 records out 00:04:50.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277198 s, 14.8 MB/s 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.636 11:05:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.636 11:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.636 11:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.636 11:05:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.897 /dev/nbd1 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.897 1+0 records in 00:04:50.897 1+0 records out 00:04:50.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274946 s, 14.9 MB/s 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:50.897 11:05:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.897 11:05:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:51.158 { 00:04:51.158 "nbd_device": "/dev/nbd0", 00:04:51.158 "bdev_name": "Malloc0" 00:04:51.158 }, 00:04:51.158 { 00:04:51.158 "nbd_device": "/dev/nbd1", 00:04:51.158 "bdev_name": "Malloc1" 00:04:51.158 } 00:04:51.158 ]' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.158 { 00:04:51.158 "nbd_device": "/dev/nbd0", 00:04:51.158 "bdev_name": "Malloc0" 00:04:51.158 }, 00:04:51.158 { 00:04:51.158 "nbd_device": "/dev/nbd1", 00:04:51.158 "bdev_name": "Malloc1" 00:04:51.158 } 00:04:51.158 ]' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.158 /dev/nbd1' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.158 /dev/nbd1' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.158 256+0 records in 00:04:51.158 256+0 records out 00:04:51.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012295 s, 85.3 MB/s 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.158 256+0 records in 00:04:51.158 256+0 records out 00:04:51.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120022 s, 87.4 MB/s 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.158 256+0 records in 00:04:51.158 256+0 records out 00:04:51.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131407 s, 79.8 MB/s 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.158 11:05:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.419 11:05:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.419 11:05:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.680 11:05:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.680 11:05:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.941 11:05:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.941 [2024-11-20 11:05:44.656889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.202 [2024-11-20 11:05:44.686552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.202 [2024-11-20 11:05:44.686552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.202 [2024-11-20 11:05:44.716082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.202 [2024-11-20 11:05:44.716113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:55.504 11:05:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.504 11:05:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:55.504 spdk_app_start Round 2 00:04:55.504 11:05:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2500917 /var/tmp/spdk-nbd.sock 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2500917 ']' 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
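Round 1 above repeats the same data-integrity core as Round 0: nbd_dd_data_verify seeds a 1 MiB file from /dev/urandom, writes it through each nbd device with direct I/O, and then compares the device contents byte-for-byte with cmp before deleting the reference file. Stripped of the test framework, the pattern is roughly (paths illustrative):

  tmp=/tmp/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  # 256 x 4 KiB = 1 MiB of random reference data
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      # write the reference data through each device, bypassing the page cache
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
      # read back: byte-for-byte comparison of the first 1 MiB
      cmp -b -n 1M "$tmp" "$dev"
  done
  rm "$tmp"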
00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:55.504 11:05:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:55.504 11:05:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.504 Malloc0 00:04:55.504 11:05:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.504 Malloc1 00:04:55.504 11:05:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.504 11:05:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.504 /dev/nbd0 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:55.765 1+0 records in 00:04:55.765 1+0 records out 00:04:55.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272556 s, 15.0 MB/s 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:55.765 /dev/nbd1 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:55.765 11:05:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:55.765 11:05:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.765 1+0 records in 00:04:55.765 1+0 records out 00:04:55.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271361 s, 15.1 MB/s 00:04:56.026 11:05:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.026 11:05:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.026 11:05:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.026 11:05:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.026 11:05:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:56.026 { 00:04:56.026 "nbd_device": "/dev/nbd0", 00:04:56.026 "bdev_name": "Malloc0" 00:04:56.026 }, 00:04:56.026 { 00:04:56.026 "nbd_device": "/dev/nbd1", 00:04:56.026 "bdev_name": "Malloc1" 00:04:56.026 } 00:04:56.026 ]' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.026 { 00:04:56.026 "nbd_device": "/dev/nbd0", 00:04:56.026 "bdev_name": "Malloc0" 00:04:56.026 }, 00:04:56.026 { 00:04:56.026 "nbd_device": "/dev/nbd1", 00:04:56.026 "bdev_name": "Malloc1" 00:04:56.026 } 00:04:56.026 ]' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.026 /dev/nbd1' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.026 /dev/nbd1' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.026 11:05:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.287 256+0 records in 00:04:56.287 256+0 records out 00:04:56.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127367 s, 82.3 MB/s 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.287 256+0 records in 00:04:56.287 256+0 records out 00:04:56.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123536 s, 84.9 MB/s 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.287 256+0 records in 00:04:56.287 256+0 records out 00:04:56.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130989 s, 80.1 MB/s 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.287 11:05:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.287 11:05:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.548 11:05:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:56.809 11:05:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:56.809 11:05:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.071 11:05:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.071 [2024-11-20 11:05:49.715557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.071 [2024-11-20 11:05:49.745525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.071 [2024-11-20 11:05:49.745526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.071 [2024-11-20 11:05:49.774524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:57.071 [2024-11-20 11:05:49.774555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.375 11:05:52 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2500917 /var/tmp/spdk-nbd.sock 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2500917 ']' 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
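Teardown in each round, traced above, is equally uniform: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq -r '.[] | .nbd_device' extracts the device paths, each device is detached with nbd_stop_disk, and a waitfornbd_exit-style loop watches /proc/partitions until the entry disappears, after which the device count is expected to drop to 0. A sketch of that sequence against the socket used in this log:

  sock=/var/tmp/spdk-nbd.sock
  # pull the device paths out of the JSON array the RPC returns
  disks=$(scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
  for dev in $disks; do
      scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
      # wait for the kernel to retire the partition entry
      while grep -q -w "$(basename "$dev")" /proc/partitions; do
          sleep 0.1
      done
  done
  # an empty list now counts 0 devices, matching the '[' 0 -ne 0 ']' check above
  scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' |
      grep -c /dev/nbd || true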
00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:00.375 11:05:52 event.app_repeat -- event/event.sh@39 -- # killprocess 2500917 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2500917 ']' 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2500917 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500917 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500917' 00:05:00.375 killing process with pid 2500917 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2500917 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2500917 00:05:00.375 spdk_app_start is called in Round 0. 00:05:00.375 Shutdown signal received, stop current app iteration 00:05:00.375 Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 reinitialization... 00:05:00.375 spdk_app_start is called in Round 1. 00:05:00.375 Shutdown signal received, stop current app iteration 00:05:00.375 Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 reinitialization... 00:05:00.375 spdk_app_start is called in Round 2. 00:05:00.375 Shutdown signal received, stop current app iteration 00:05:00.375 Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 reinitialization... 00:05:00.375 spdk_app_start is called in Round 3. 
00:05:00.375 Shutdown signal received, stop current app iteration 00:05:00.375 11:05:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:00.375 11:05:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:00.375 00:05:00.375 real 0m15.729s 00:05:00.375 user 0m34.417s 00:05:00.375 sys 0m2.340s 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.375 11:05:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.375 ************************************ 00:05:00.375 END TEST app_repeat 00:05:00.375 ************************************ 00:05:00.375 11:05:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:00.375 11:05:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:00.375 11:05:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.375 11:05:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.375 11:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.375 ************************************ 00:05:00.375 START TEST cpu_locks 00:05:00.375 ************************************ 00:05:00.375 11:05:53 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:00.637 * Looking for test storage... 00:05:00.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.637 11:05:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.637 --rc genhtml_branch_coverage=1 00:05:00.637 --rc genhtml_function_coverage=1 00:05:00.637 --rc genhtml_legend=1 00:05:00.637 --rc geninfo_all_blocks=1 00:05:00.637 --rc geninfo_unexecuted_blocks=1 00:05:00.637 00:05:00.637 ' 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.637 --rc genhtml_branch_coverage=1 00:05:00.637 --rc genhtml_function_coverage=1 00:05:00.637 --rc genhtml_legend=1 00:05:00.637 --rc geninfo_all_blocks=1 00:05:00.637 --rc geninfo_unexecuted_blocks=1 00:05:00.637 00:05:00.637 ' 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.637 --rc genhtml_branch_coverage=1 00:05:00.637 --rc genhtml_function_coverage=1 00:05:00.637 --rc genhtml_legend=1 00:05:00.637 --rc geninfo_all_blocks=1 00:05:00.637 --rc geninfo_unexecuted_blocks=1 00:05:00.637 00:05:00.637 ' 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.637 --rc genhtml_branch_coverage=1 00:05:00.637 --rc genhtml_function_coverage=1 00:05:00.637 --rc genhtml_legend=1 00:05:00.637 --rc geninfo_all_blocks=1 00:05:00.637 --rc geninfo_unexecuted_blocks=1 00:05:00.637 00:05:00.637 ' 00:05:00.637 11:05:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:00.637 11:05:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:00.637 11:05:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:00.637 11:05:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.637 11:05:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.637 ************************************ 
00:05:00.637 START TEST default_locks 00:05:00.637 ************************************ 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2504498 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2504498 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2504498 ']' 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.637 11:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.637 [2024-11-20 11:05:53.359957] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:00.637 [2024-11-20 11:05:53.360020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504498 ] 00:05:00.898 [2024-11-20 11:05:53.447660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.898 [2024-11-20 11:05:53.482520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.468 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.468 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:01.468 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2504498 00:05:01.468 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2504498 00:05:01.468 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.040 lslocks: write error 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2504498 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2504498 ']' 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2504498 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2504498 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2504498' 00:05:02.040 killing process with pid 2504498 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2504498 00:05:02.040 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2504498 00:05:02.301 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2504498 00:05:02.301 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:02.301 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2504498 00:05:02.301 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:02.301 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.301 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2504498 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2504498 ']' 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
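The killprocess calls traced through this stretch follow one recognizable shape: confirm the pid is alive with kill -0, check what the process actually is with ps, then kill it and reap it with wait. A hedged reconstruction of that helper, simplified from what the trace shows:

# Sketch of the killprocess pattern in the trace: verify the pid, inspect
# the command name, then kill and reap. The real helper in
# autotest_common.sh treats a sudo wrapper specially; this sketch just bails.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                    # still running?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1   # sudo handled specially upstream
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # reap; ignore the exit status
}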
00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2504498) - No such process 00:05:02.302 ERROR: process (pid: 2504498) is no longer running 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.302 00:05:02.302 real 0m1.582s 00:05:02.302 user 0m1.690s 00:05:02.302 sys 0m0.557s 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.302 11:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.302 ************************************ 00:05:02.302 END TEST default_locks 00:05:02.302 ************************************ 00:05:02.302 11:05:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:02.302 11:05:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.302 11:05:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.302 11:05:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.302 ************************************ 00:05:02.302 START TEST default_locks_via_rpc 00:05:02.302 ************************************ 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2504858 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2504858 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2504858 ']' 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
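The "No such process" error just above is the expected outcome: the test wraps waitforlisten in a NOT helper so that a failure counts as a pass. A minimal sketch of that wrapper, with the signal-handling detail of the real helper reduced to a comment:

# Run a command that is expected to fail; succeed only if it did fail.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # the real helper also special-cases es > 128 (signal deaths)
}

# Usage in the same spirit as the log: probing a dead pid must fail.
NOT kill -0 2504498 2>/dev/null && echo "pid 2504498 is gone, as expected"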
00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.302 11:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.302 [2024-11-20 11:05:55.008612] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:02.302 [2024-11-20 11:05:55.008663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504858 ] 00:05:02.563 [2024-11-20 11:05:55.090613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.563 [2024-11-20 11:05:55.122815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2504858 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2504858 00:05:03.134 11:05:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2504858 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2504858 ']' 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2504858 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2504858 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.706 
11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2504858' 00:05:03.706 killing process with pid 2504858 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2504858 00:05:03.706 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2504858 00:05:03.969 00:05:03.969 real 0m1.546s 00:05:03.969 user 0m1.666s 00:05:03.969 sys 0m0.522s 00:05:03.969 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.969 11:05:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.969 ************************************ 00:05:03.969 END TEST default_locks_via_rpc 00:05:03.969 ************************************ 00:05:03.969 11:05:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:03.969 11:05:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.969 11:05:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.969 11:05:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.969 ************************************ 00:05:03.969 START TEST non_locking_app_on_locked_coremask 00:05:03.969 ************************************ 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2505227 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2505227 /var/tmp/spdk.sock 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2505227 ']' 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.969 11:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.969 [2024-11-20 11:05:56.631381] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
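The default_locks_via_rpc run that just finished never restarts the target: it drops and re-takes the per-core lock files through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs seen in the trace. A sketch of that toggle, assuming tgt_pid already holds the running target's pid:

rpc_py="scripts/rpc.py -s /var/tmp/spdk.sock"    # socket path from the log

$rpc_py framework_disable_cpumask_locks          # lock files released
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo "no core locks held"

$rpc_py framework_enable_cpumask_locks           # lock files re-acquired
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock is back"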
00:05:03.969 [2024-11-20 11:05:56.631431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505227 ] 00:05:04.230 [2024-11-20 11:05:56.715931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.230 [2024-11-20 11:05:56.746101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2505243 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2505243 /var/tmp/spdk2.sock 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2505243 ']' 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.802 11:05:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.802 [2024-11-20 11:05:57.450666] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:04.802 [2024-11-20 11:05:57.450715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505243 ] 00:05:04.802 [2024-11-20 11:05:57.538046] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
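The launch that just completed is the heart of non_locking_app_on_locked_coremask: the first target claims core 0's lock file, and a second target shares the same core but passes --disable-cpumask-locks, so it is allowed to start. A sketch of that setup, with the long Jenkins path shortened to an assumed SPDK_BIN:

SPDK_BIN=build/bin/spdk_tgt                      # assumed path to the binary

$SPDK_BIN -m 0x1 & pid1=$!                       # holds /var/tmp/spdk_cpu_lock_000
$SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!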
00:05:04.802 [2024-11-20 11:05:57.538067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.063 [2024-11-20 11:05:57.596446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.634 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.634 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.634 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2505227 00:05:05.634 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.634 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2505227 00:05:06.206 lslocks: write error 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2505227 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2505227 ']' 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2505227 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505227 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505227' 00:05:06.206 killing process with pid 2505227 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2505227 00:05:06.206 11:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2505227 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2505243 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2505243 ']' 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2505243 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505243 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505243' 00:05:06.778 
killing process with pid 2505243 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2505243 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2505243 00:05:06.778 00:05:06.778 real 0m2.893s 00:05:06.778 user 0m3.217s 00:05:06.778 sys 0m0.887s 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.778 11:05:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.778 ************************************ 00:05:06.778 END TEST non_locking_app_on_locked_coremask 00:05:06.778 ************************************ 00:05:06.778 11:05:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.778 11:05:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.778 11:05:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.778 11:05:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.039 ************************************ 00:05:07.039 START TEST locking_app_on_unlocked_coremask 00:05:07.039 ************************************ 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2505788 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2505788 /var/tmp/spdk.sock 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2505788 ']' 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.039 11:05:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.039 [2024-11-20 11:05:59.602635] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:07.040 [2024-11-20 11:05:59.602696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505788 ] 00:05:07.040 [2024-11-20 11:05:59.688973] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:07.040 [2024-11-20 11:05:59.689007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.040 [2024-11-20 11:05:59.724879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2505971 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2505971 /var/tmp/spdk2.sock 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2505971 ']' 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.983 11:06:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.983 [2024-11-20 11:06:00.467594] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:05:07.983 [2024-11-20 11:06:00.467651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505971 ] 00:05:07.983 [2024-11-20 11:06:00.556186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.983 [2024-11-20 11:06:00.618451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.556 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.556 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.556 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2505971 00:05:08.556 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2505971 00:05:08.556 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.128 lslocks: write error 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2505788 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2505788 ']' 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2505788 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505788 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505788' 00:05:09.128 killing process with pid 2505788 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2505788 00:05:09.128 11:06:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2505788 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2505971 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2505971 ']' 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2505971 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505971 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.698 11:06:02 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505971' 00:05:09.698 killing process with pid 2505971 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2505971 00:05:09.698 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2505971 00:05:09.959 00:05:09.959 real 0m2.909s 00:05:09.959 user 0m3.238s 00:05:09.959 sys 0m0.898s 00:05:09.959 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.959 11:06:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.959 ************************************ 00:05:09.959 END TEST locking_app_on_unlocked_coremask 00:05:09.959 ************************************ 00:05:09.959 11:06:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:09.959 11:06:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.959 11:06:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.959 11:06:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.959 ************************************ 00:05:09.959 START TEST locking_app_on_locked_coremask 00:05:09.959 ************************************ 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2506437 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2506437 /var/tmp/spdk.sock 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2506437 ']' 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.960 11:06:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.960 [2024-11-20 11:06:02.589961] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:05:09.960 [2024-11-20 11:06:02.590022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506437 ] 00:05:09.960 [2024-11-20 11:06:02.676198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.221 [2024-11-20 11:06:02.716572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2506763 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2506763 /var/tmp/spdk2.sock 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2506763 /var/tmp/spdk2.sock 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2506763 /var/tmp/spdk2.sock 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2506763 ']' 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.793 11:06:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.793 [2024-11-20 11:06:03.444796] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:05:10.793 [2024-11-20 11:06:03.444851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2506763 ] 00:05:11.054 [2024-11-20 11:06:03.533014] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2506437 has claimed it. 00:05:11.054 [2024-11-20 11:06:03.533049] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2506763) - No such process 00:05:11.316 ERROR: process (pid: 2506763) is no longer running 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2506437 00:05:11.316 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2506437 00:05:11.576 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.837 lslocks: write error 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2506437 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2506437 ']' 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2506437 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506437 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506437' 00:05:11.837 killing process with pid 2506437 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2506437 00:05:11.837 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2506437 00:05:12.098 00:05:12.098 real 0m2.203s 00:05:12.098 user 0m2.483s 00:05:12.098 sys 0m0.632s 00:05:12.098 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
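The claim_cpu_cores error above is the point of locking_app_on_locked_coremask: with locking left on, a second target on an already-locked core must refuse to start ("Unable to acquire lock on assigned core mask - exiting."). A sketch of that negative case, with a sleep standing in for the waitforlisten polling and SPDK_BIN again an assumed path:

SPDK_BIN=build/bin/spdk_tgt

$SPDK_BIN -m 0x1 -r /var/tmp/spdk.sock & pid1=$!
sleep 1                                          # crude stand-in for waitforlisten
if ! $SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance could not claim core 0, as expected"
fi
kill "$pid1"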
00:05:12.098 11:06:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.098 ************************************ 00:05:12.098 END TEST locking_app_on_locked_coremask 00:05:12.098 ************************************ 00:05:12.098 11:06:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.098 11:06:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.098 11:06:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.098 11:06:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.098 ************************************ 00:05:12.098 START TEST locking_overlapped_coremask 00:05:12.098 ************************************ 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2507049 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2507049 /var/tmp/spdk.sock 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2507049 ']' 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.098 11:06:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.358 [2024-11-20 11:06:04.866755] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:05:12.358 [2024-11-20 11:06:04.866810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507049 ] 00:05:12.358 [2024-11-20 11:06:04.952833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.358 [2024-11-20 11:06:04.988802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.358 [2024-11-20 11:06:04.988948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.358 [2024-11-20 11:06:04.988950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2507150 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2507150 /var/tmp/spdk2.sock 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2507150 /var/tmp/spdk2.sock 00:05:12.930 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2507150 /var/tmp/spdk2.sock 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2507150 ']' 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.191 11:06:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.191 [2024-11-20 11:06:05.724360] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:05:13.191 [2024-11-20 11:06:05.724415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507150 ] 00:05:13.191 [2024-11-20 11:06:05.837135] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2507049 has claimed it. 00:05:13.191 [2024-11-20 11:06:05.837181] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2507150) - No such process 00:05:13.763 ERROR: process (pid: 2507150) is no longer running 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2507049 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2507049 ']' 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2507049 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507049 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507049' 00:05:13.763 killing process with pid 2507049 00:05:13.763 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2507049 00:05:13.763 11:06:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2507049 00:05:14.023 00:05:14.023 real 0m1.784s 00:05:14.023 user 0m5.150s 00:05:14.023 sys 0m0.410s 00:05:14.023 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.023 11:06:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.023 ************************************ 00:05:14.023 END TEST locking_overlapped_coremask 00:05:14.023 ************************************ 00:05:14.023 11:06:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:14.023 11:06:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.023 11:06:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.023 11:06:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.023 ************************************ 00:05:14.023 START TEST locking_overlapped_coremask_via_rpc 00:05:14.023 ************************************ 00:05:14.023 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:14.023 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2507504 00:05:14.023 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2507504 /var/tmp/spdk.sock 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2507504 ']' 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.024 11:06:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.024 [2024-11-20 11:06:06.726569] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:14.024 [2024-11-20 11:06:06.726622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507504 ] 00:05:14.285 [2024-11-20 11:06:06.812623] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
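For reference, this via_rpc variant launches both targets with --disable-cpumask-locks (visible in the traces above), so the overlapping masks, 0x7 and 0x1c sharing core 2, start cleanly and the claim is deferred to an RPC. A condensed sketch of that launch, restating the commands already in this log rather than adding anything new:

BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
# First instance: cores 0-2 (0x7), default RPC socket /var/tmp/spdk.sock
"$BIN" -m 0x7 --disable-cpumask-locks &
# Second instance: cores 2-4 (0x1c), separate socket so both can be driven
"$BIN" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

The startup notices continue below; note that both instances report their reactors without any lock error.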
00:05:14.285 [2024-11-20 11:06:06.812649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.285 [2024-11-20 11:06:06.848237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.285 [2024-11-20 11:06:06.848496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.285 [2024-11-20 11:06:06.848497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2507521 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2507521 /var/tmp/spdk2.sock 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2507521 ']' 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.862 11:06:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.862 [2024-11-20 11:06:07.584844] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:14.862 [2024-11-20 11:06:07.584896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507521 ] 00:05:15.125 [2024-11-20 11:06:07.697987] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.125 [2024-11-20 11:06:07.698017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.125 [2024-11-20 11:06:07.776329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.125 [2024-11-20 11:06:07.776451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.125 [2024-11-20 11:06:07.776453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.697 [2024-11-20 11:06:08.387237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2507504 has claimed it. 
00:05:15.697 request:
00:05:15.697 {
00:05:15.697 "method": "framework_enable_cpumask_locks",
00:05:15.697 "req_id": 1
00:05:15.697 }
00:05:15.697 Got JSON-RPC error response
00:05:15.697 response:
00:05:15.697 {
00:05:15.697 "code": -32603,
00:05:15.697 "message": "Failed to claim CPU core: 2"
00:05:15.697 }
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2507504 /var/tmp/spdk.sock
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2507504 ']'
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:15.697 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
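The -32603 error above is the expected result: the first target has already claimed cores 0-2 through the RPC, so the second target's claim on shared core 2 must fail. In outline, the sequence the test drives (the rpc_cmd wrappers in the trace resolve to scripts/rpc.py; the direct invocations here are an equivalent restatement, not extra steps from the log):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# First target (default socket) claims cores 0-2; succeeds.
"$RPC" framework_enable_cpumask_locks
# Second target then tries cores 2-4; core 2 is taken, so this
# returns the -32603 "Failed to claim CPU core: 2" shown above.
"$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: core 2 already locked"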
00:05:15.959 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.959 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.221 00:05:16.221 real 0m2.103s 00:05:16.221 user 0m0.880s 00:05:16.221 sys 0m0.148s 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.221 11:06:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.221 ************************************ 00:05:16.221 END TEST locking_overlapped_coremask_via_rpc 00:05:16.221 ************************************ 00:05:16.221 11:06:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:16.221 11:06:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2507504 ]] 00:05:16.221 11:06:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2507504 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2507504 ']' 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2507504 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507504 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507504' 00:05:16.221 killing process with pid 2507504 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2507504 00:05:16.221 11:06:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2507504 00:05:16.482 11:06:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2507521 ]] 00:05:16.482 11:06:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2507521 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2507521 ']' 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2507521 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507521 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507521' 00:05:16.482 killing process with pid 2507521 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2507521 00:05:16.482 11:06:09 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2507521 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2507504 ]] 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2507504 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2507504 ']' 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2507504 00:05:16.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2507504) - No such process 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2507504 is not found' 00:05:16.743 Process with pid 2507504 is not found 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2507521 ]] 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2507521 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2507521 ']' 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2507521 00:05:16.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2507521) - No such process 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2507521 is not found' 00:05:16.743 Process with pid 2507521 is not found 00:05:16.743 11:06:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:16.743 00:05:16.743 real 0m16.274s 00:05:16.743 user 0m28.405s 00:05:16.743 sys 0m5.003s 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.743 11:06:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.743 ************************************ 00:05:16.743 END TEST cpu_locks 00:05:16.743 ************************************ 00:05:16.743 00:05:16.743 real 0m42.049s 00:05:16.743 user 1m22.280s 00:05:16.743 sys 0m8.416s 00:05:16.743 11:06:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.743 11:06:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.743 ************************************ 00:05:16.743 END TEST event 00:05:16.743 ************************************ 00:05:16.743 11:06:09 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:16.743 11:06:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.743 11:06:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.743 11:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.743 ************************************ 00:05:16.743 START TEST thread 00:05:16.743 ************************************ 00:05:16.743 11:06:09 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.004 * Looking for test storage... 00:05:17.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.004 11:06:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.004 11:06:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.004 11:06:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.004 11:06:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.004 11:06:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.004 11:06:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.004 11:06:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.004 11:06:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.004 11:06:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.004 11:06:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.004 11:06:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.004 11:06:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:17.004 11:06:09 thread -- scripts/common.sh@345 -- # : 1 00:05:17.004 11:06:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.004 11:06:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.004 11:06:09 thread -- scripts/common.sh@365 -- # decimal 1 00:05:17.004 11:06:09 thread -- scripts/common.sh@353 -- # local d=1 00:05:17.004 11:06:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.004 11:06:09 thread -- scripts/common.sh@355 -- # echo 1 00:05:17.004 11:06:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.004 11:06:09 thread -- scripts/common.sh@366 -- # decimal 2 00:05:17.004 11:06:09 thread -- scripts/common.sh@353 -- # local d=2 00:05:17.004 11:06:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.004 11:06:09 thread -- scripts/common.sh@355 -- # echo 2 00:05:17.004 11:06:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.004 11:06:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.004 11:06:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.004 11:06:09 thread -- scripts/common.sh@368 -- # return 0 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.004 --rc genhtml_branch_coverage=1 00:05:17.004 --rc genhtml_function_coverage=1 00:05:17.004 --rc genhtml_legend=1 00:05:17.004 --rc geninfo_all_blocks=1 00:05:17.004 --rc geninfo_unexecuted_blocks=1 00:05:17.004 00:05:17.004 ' 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.004 --rc genhtml_branch_coverage=1 00:05:17.004 --rc genhtml_function_coverage=1 00:05:17.004 --rc genhtml_legend=1 00:05:17.004 --rc geninfo_all_blocks=1 00:05:17.004 --rc geninfo_unexecuted_blocks=1 00:05:17.004 
00:05:17.004 ' 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.004 --rc genhtml_branch_coverage=1 00:05:17.004 --rc genhtml_function_coverage=1 00:05:17.004 --rc genhtml_legend=1 00:05:17.004 --rc geninfo_all_blocks=1 00:05:17.004 --rc geninfo_unexecuted_blocks=1 00:05:17.004 00:05:17.004 ' 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.004 --rc genhtml_branch_coverage=1 00:05:17.004 --rc genhtml_function_coverage=1 00:05:17.004 --rc genhtml_legend=1 00:05:17.004 --rc geninfo_all_blocks=1 00:05:17.004 --rc geninfo_unexecuted_blocks=1 00:05:17.004 00:05:17.004 ' 00:05:17.004 11:06:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.004 11:06:09 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.004 ************************************ 00:05:17.004 START TEST thread_poller_perf 00:05:17.004 ************************************ 00:05:17.004 11:06:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.004 [2024-11-20 11:06:09.713073] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:17.004 [2024-11-20 11:06:09.713188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508405 ] 00:05:17.266 [2024-11-20 11:06:09.804137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.266 [2024-11-20 11:06:09.846550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.266 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:18.208 [2024-11-20T10:06:10.950Z] ======================================
00:05:18.208 [2024-11-20T10:06:10.950Z] busy:2404714568 (cyc)
00:05:18.208 [2024-11-20T10:06:10.950Z] total_run_count: 411000
00:05:18.208 [2024-11-20T10:06:10.950Z] tsc_hz: 2400000000 (cyc)
00:05:18.208 [2024-11-20T10:06:10.950Z] ======================================
00:05:18.208 [2024-11-20T10:06:10.950Z] poller_cost: 5850 (cyc), 2437 (nsec)
00:05:18.208
00:05:18.208 real 0m1.189s
00:05:18.208 user 0m1.094s
00:05:18.208 sys 0m0.090s
00:05:18.208 11:06:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.208 11:06:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:18.208 ************************************
00:05:18.208 END TEST thread_poller_perf
00:05:18.208 ************************************
00:05:18.208 11:06:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:18.208 11:06:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:18.208 11:06:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.208 11:06:10 thread -- common/autotest_common.sh@10 -- # set +x
00:05:18.470 ************************************
00:05:18.470 START TEST thread_poller_perf
00:05:18.470 ************************************
00:05:18.470 11:06:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:18.470 [2024-11-20 11:06:10.979368] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:05:18.470 [2024-11-20 11:06:10.979477] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2508761 ]
00:05:18.470 [2024-11-20 11:06:11.069730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.470 [2024-11-20 11:06:11.106517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.470 Running 1000 pollers for 1 seconds with 0 microseconds period.
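Before the second run's summary prints below, it is worth spelling out how poller_cost is derived: busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. A quick check against the first run's counters reproduces the reported figures:

# Values copied from the summary above (first run, 1 usec period)
busy=2404714568; runs=411000; tsc_hz=2400000000
cyc=$(( busy / runs ))                       # 5850 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))        # 2437 ns at 2.4 GHz
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The second run, with a 0 microsecond period, executes far more polls (5551000), so the per-poll cost drops to 432 cycles (180 ns) by the same arithmetic.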
00:05:19.412 [2024-11-20T10:06:12.154Z] ======================================
00:05:19.412 [2024-11-20T10:06:12.154Z] busy:2401336930 (cyc)
00:05:19.412 [2024-11-20T10:06:12.154Z] total_run_count: 5551000
00:05:19.412 [2024-11-20T10:06:12.154Z] tsc_hz: 2400000000 (cyc)
00:05:19.412 [2024-11-20T10:06:12.154Z] ======================================
00:05:19.412 [2024-11-20T10:06:12.154Z] poller_cost: 432 (cyc), 180 (nsec)
00:05:19.412
00:05:19.412 real 0m1.175s
00:05:19.412 user 0m1.086s
00:05:19.412 sys 0m0.083s
00:05:19.412 11:06:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.412 11:06:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:19.412 ************************************
00:05:19.412 END TEST thread_poller_perf
00:05:19.412 ************************************
00:05:19.672 11:06:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:19.673
00:05:19.673 real 0m2.727s user 0m2.354s sys 0m0.385s
00:05:19.673 11:06:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.673 11:06:12 thread -- common/autotest_common.sh@10 -- # set +x
00:05:19.673 ************************************
00:05:19.673 END TEST thread
00:05:19.673 ************************************
00:05:19.673 11:06:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:19.673 11:06:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:19.673 11:06:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.673 11:06:12 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.673 11:06:12 -- common/autotest_common.sh@10 -- # set +x
00:05:19.673 ************************************
00:05:19.673 START TEST app_cmdline
00:05:19.673 ************************************
00:05:19.673 11:06:12 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh * Looking for test storage...
00:05:19.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:19.673 11:06:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.673 11:06:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.673 11:06:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.934 11:06:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.934 --rc genhtml_branch_coverage=1 00:05:19.934 --rc genhtml_function_coverage=1 00:05:19.934 --rc genhtml_legend=1 00:05:19.934 --rc geninfo_all_blocks=1 00:05:19.934 --rc geninfo_unexecuted_blocks=1 00:05:19.934 00:05:19.934 ' 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.934 --rc genhtml_branch_coverage=1 00:05:19.934 --rc genhtml_function_coverage=1 00:05:19.934 --rc genhtml_legend=1 00:05:19.934 --rc geninfo_all_blocks=1 00:05:19.934 --rc geninfo_unexecuted_blocks=1 
00:05:19.934 00:05:19.934 ' 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.934 --rc genhtml_branch_coverage=1 00:05:19.934 --rc genhtml_function_coverage=1 00:05:19.934 --rc genhtml_legend=1 00:05:19.934 --rc geninfo_all_blocks=1 00:05:19.934 --rc geninfo_unexecuted_blocks=1 00:05:19.934 00:05:19.934 ' 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.934 --rc genhtml_branch_coverage=1 00:05:19.934 --rc genhtml_function_coverage=1 00:05:19.934 --rc genhtml_legend=1 00:05:19.934 --rc geninfo_all_blocks=1 00:05:19.934 --rc geninfo_unexecuted_blocks=1 00:05:19.934 00:05:19.934 ' 00:05:19.934 11:06:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:19.934 11:06:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2509181 00:05:19.934 11:06:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2509181 00:05:19.934 11:06:12 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2509181 ']' 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.934 11:06:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.934 [2024-11-20 11:06:12.507074] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
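While this target initializes (its EAL parameters follow), note the flag it was launched with just above: --rpcs-allowed spdk_get_version,rpc_get_methods restricts the RPC server to exactly those two methods, which is what the rest of this test exercises. In outline, with the rpc.py path from this log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" spdk_get_version          # allowed: returns the version JSON below
"$RPC" rpc_get_methods           # allowed: lists exactly the two methods
"$RPC" env_dpdk_get_mem_stats    # not in the allowlist: fails with
                                 # JSON-RPC -32601 "Method not found"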
00:05:19.934 [2024-11-20 11:06:12.507143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509181 ] 00:05:19.934 [2024-11-20 11:06:12.595581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.934 [2024-11-20 11:06:12.630320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:20.877 { 00:05:20.877 "version": "SPDK v25.01-pre git sha1 4d3e9954d", 00:05:20.877 "fields": { 00:05:20.877 "major": 25, 00:05:20.877 "minor": 1, 00:05:20.877 "patch": 0, 00:05:20.877 "suffix": "-pre", 00:05:20.877 "commit": "4d3e9954d" 00:05:20.877 } 00:05:20.877 } 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:20.877 11:06:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:05:20.877 11:06:13 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:21.138 request:
00:05:21.138 {
00:05:21.138 "method": "env_dpdk_get_mem_stats",
00:05:21.138 "req_id": 1
00:05:21.138 }
00:05:21.138 Got JSON-RPC error response
00:05:21.138 response:
00:05:21.138 {
00:05:21.138 "code": -32601,
00:05:21.138 "message": "Method not found"
00:05:21.138 }
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:21.138 11:06:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2509181
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2509181 ']'
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2509181
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509181
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:21.138 11:06:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509181'
killing process with pid 2509181
11:06:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 2509181
11:06:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 2509181
00:05:21.399
00:05:21.399 real 0m1.687s user 0m2.029s sys 0m0.445s
00:05:21.399 11:06:13 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.399 11:06:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:21.399 ************************************
00:05:21.399 END TEST app_cmdline
00:05:21.399 ************************************
00:05:21.399 11:06:13 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:21.399 11:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.399 11:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.399 11:06:13 -- common/autotest_common.sh@10 -- # set +x
00:05:21.399 ************************************
00:05:21.399 START TEST version
00:05:21.399 ************************************
00:05:21.399 11:06:14 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh * Looking for test storage...
00:05:21.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:21.399 11:06:14 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.399 11:06:14 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.399 11:06:14 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.660 11:06:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.660 11:06:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.660 11:06:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.660 11:06:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.660 11:06:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.660 11:06:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.660 11:06:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.660 11:06:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.660 11:06:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.660 11:06:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.660 11:06:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.660 11:06:14 version -- scripts/common.sh@344 -- # case "$op" in 00:05:21.660 11:06:14 version -- scripts/common.sh@345 -- # : 1 00:05:21.660 11:06:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.660 11:06:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.660 11:06:14 version -- scripts/common.sh@365 -- # decimal 1 00:05:21.660 11:06:14 version -- scripts/common.sh@353 -- # local d=1 00:05:21.660 11:06:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.660 11:06:14 version -- scripts/common.sh@355 -- # echo 1 00:05:21.660 11:06:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.660 11:06:14 version -- scripts/common.sh@366 -- # decimal 2 00:05:21.660 11:06:14 version -- scripts/common.sh@353 -- # local d=2 00:05:21.660 11:06:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.660 11:06:14 version -- scripts/common.sh@355 -- # echo 2 00:05:21.660 11:06:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.660 11:06:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.660 11:06:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.660 11:06:14 version -- scripts/common.sh@368 -- # return 0 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.660 --rc genhtml_branch_coverage=1 00:05:21.660 --rc genhtml_function_coverage=1 00:05:21.660 --rc genhtml_legend=1 00:05:21.660 --rc geninfo_all_blocks=1 00:05:21.660 --rc geninfo_unexecuted_blocks=1 00:05:21.660 00:05:21.660 ' 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.660 --rc genhtml_branch_coverage=1 00:05:21.660 --rc genhtml_function_coverage=1 00:05:21.660 --rc genhtml_legend=1 00:05:21.660 --rc geninfo_all_blocks=1 00:05:21.660 --rc geninfo_unexecuted_blocks=1 00:05:21.660 00:05:21.660 ' 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.660 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.660 --rc genhtml_branch_coverage=1 00:05:21.660 --rc genhtml_function_coverage=1 00:05:21.660 --rc genhtml_legend=1 00:05:21.660 --rc geninfo_all_blocks=1 00:05:21.660 --rc geninfo_unexecuted_blocks=1 00:05:21.660 00:05:21.660 ' 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.660 --rc genhtml_branch_coverage=1 00:05:21.660 --rc genhtml_function_coverage=1 00:05:21.660 --rc genhtml_legend=1 00:05:21.660 --rc geninfo_all_blocks=1 00:05:21.660 --rc geninfo_unexecuted_blocks=1 00:05:21.660 00:05:21.660 ' 00:05:21.660 11:06:14 version -- app/version.sh@17 -- # get_header_version major 00:05:21.660 11:06:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # cut -f2 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.660 11:06:14 version -- app/version.sh@17 -- # major=25 00:05:21.660 11:06:14 version -- app/version.sh@18 -- # get_header_version minor 00:05:21.660 11:06:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # cut -f2 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.660 11:06:14 version -- app/version.sh@18 -- # minor=1 00:05:21.660 11:06:14 version -- app/version.sh@19 -- # get_header_version patch 00:05:21.660 11:06:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # cut -f2 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.660 11:06:14 version -- app/version.sh@19 -- # patch=0 00:05:21.660 11:06:14 version -- app/version.sh@20 -- # get_header_version suffix 00:05:21.660 11:06:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # cut -f2 00:05:21.660 11:06:14 version -- app/version.sh@14 -- # tr -d '"' 00:05:21.660 11:06:14 version -- app/version.sh@20 -- # suffix=-pre 00:05:21.660 11:06:14 version -- app/version.sh@22 -- # version=25.1 00:05:21.660 11:06:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:21.660 11:06:14 version -- app/version.sh@28 -- # version=25.1rc0 00:05:21.660 11:06:14 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:21.660 11:06:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:21.660 11:06:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:21.660 11:06:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:21.660 00:05:21.660 real 0m0.274s 00:05:21.660 user 0m0.170s 00:05:21.660 sys 0m0.151s 00:05:21.660 11:06:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.660 
11:06:14 version -- common/autotest_common.sh@10 -- # set +x 00:05:21.660 ************************************ 00:05:21.660 END TEST version 00:05:21.660 ************************************ 00:05:21.660 11:06:14 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:21.660 11:06:14 -- spdk/autotest.sh@194 -- # uname -s 00:05:21.660 11:06:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:21.660 11:06:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:21.660 11:06:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:21.660 11:06:14 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:21.660 11:06:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.660 11:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:21.660 11:06:14 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:21.660 11:06:14 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:21.660 11:06:14 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:21.660 11:06:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.660 11:06:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.661 11:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:21.922 ************************************ 00:05:21.922 START TEST nvmf_tcp 00:05:21.922 ************************************ 00:05:21.922 11:06:14 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:21.922 * Looking for test storage... 
00:05:21.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:21.922 11:06:14 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.922 11:06:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.922 11:06:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.922 11:06:14 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.922 11:06:14 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.922 11:06:14 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.922 11:06:14 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.922 11:06:14 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.922 11:06:14 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.923 11:06:14 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.923 --rc genhtml_branch_coverage=1 00:05:21.923 --rc genhtml_function_coverage=1 00:05:21.923 --rc genhtml_legend=1 00:05:21.923 --rc geninfo_all_blocks=1 00:05:21.923 --rc geninfo_unexecuted_blocks=1 00:05:21.923 00:05:21.923 ' 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.923 --rc genhtml_branch_coverage=1 00:05:21.923 --rc genhtml_function_coverage=1 00:05:21.923 --rc genhtml_legend=1 00:05:21.923 --rc geninfo_all_blocks=1 00:05:21.923 --rc geninfo_unexecuted_blocks=1 00:05:21.923 00:05:21.923 ' 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.923 --rc genhtml_branch_coverage=1 00:05:21.923 --rc genhtml_function_coverage=1 00:05:21.923 --rc genhtml_legend=1 00:05:21.923 --rc geninfo_all_blocks=1 00:05:21.923 --rc geninfo_unexecuted_blocks=1 00:05:21.923 00:05:21.923 ' 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.923 --rc genhtml_branch_coverage=1 00:05:21.923 --rc genhtml_function_coverage=1 00:05:21.923 --rc genhtml_legend=1 00:05:21.923 --rc geninfo_all_blocks=1 00:05:21.923 --rc geninfo_unexecuted_blocks=1 00:05:21.923 00:05:21.923 ' 00:05:21.923 11:06:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:21.923 11:06:14 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:21.923 11:06:14 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.923 11:06:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.185 ************************************ 00:05:22.185 START TEST nvmf_target_core 00:05:22.185 ************************************ 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:22.185 * Looking for test storage... 00:05:22.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.185 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.186 --rc genhtml_branch_coverage=1 00:05:22.186 --rc genhtml_function_coverage=1 00:05:22.186 --rc genhtml_legend=1 00:05:22.186 --rc geninfo_all_blocks=1 00:05:22.186 --rc geninfo_unexecuted_blocks=1 00:05:22.186 00:05:22.186 ' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.186 --rc genhtml_branch_coverage=1 00:05:22.186 --rc genhtml_function_coverage=1 00:05:22.186 --rc genhtml_legend=1 00:05:22.186 --rc geninfo_all_blocks=1 00:05:22.186 --rc geninfo_unexecuted_blocks=1 00:05:22.186 00:05:22.186 ' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.186 --rc genhtml_branch_coverage=1 00:05:22.186 --rc genhtml_function_coverage=1 00:05:22.186 --rc genhtml_legend=1 00:05:22.186 --rc geninfo_all_blocks=1 00:05:22.186 --rc geninfo_unexecuted_blocks=1 00:05:22.186 00:05:22.186 ' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.186 --rc genhtml_branch_coverage=1 00:05:22.186 --rc genhtml_function_coverage=1 00:05:22.186 --rc genhtml_legend=1 00:05:22.186 --rc geninfo_all_blocks=1 00:05:22.186 --rc geninfo_unexecuted_blocks=1 00:05:22.186 00:05:22.186 ' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.186 11:06:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:22.449 
************************************ 00:05:22.449 START TEST nvmf_abort 00:05:22.449 ************************************ 00:05:22.449 11:06:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:22.449 * Looking for test storage... 00:05:22.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:22.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.449 --rc genhtml_branch_coverage=1 00:05:22.449 --rc genhtml_function_coverage=1 00:05:22.449 --rc genhtml_legend=1 00:05:22.449 --rc geninfo_all_blocks=1 00:05:22.449 --rc geninfo_unexecuted_blocks=1 00:05:22.449 00:05:22.449 ' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:22.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.449 --rc genhtml_branch_coverage=1 00:05:22.449 --rc genhtml_function_coverage=1 00:05:22.449 --rc genhtml_legend=1 00:05:22.449 --rc geninfo_all_blocks=1 00:05:22.449 --rc geninfo_unexecuted_blocks=1 00:05:22.449 00:05:22.449 ' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:22.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.449 --rc genhtml_branch_coverage=1 00:05:22.449 --rc genhtml_function_coverage=1 00:05:22.449 --rc genhtml_legend=1 00:05:22.449 --rc geninfo_all_blocks=1 00:05:22.449 --rc geninfo_unexecuted_blocks=1 00:05:22.449 00:05:22.449 ' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:22.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.449 --rc genhtml_branch_coverage=1 00:05:22.449 --rc genhtml_function_coverage=1 00:05:22.449 --rc genhtml_legend=1 00:05:22.449 --rc geninfo_all_blocks=1 00:05:22.449 --rc geninfo_unexecuted_blocks=1 00:05:22.449 00:05:22.449 ' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.449 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
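The "[: : integer expression expected" message repeated above comes from the traced test '[' '' -eq 1 ']' at common.sh line 33: an unset variable expands to the empty string, and bash's numeric -eq operator cannot parse that as an integer. A minimal standalone sketch of the failure mode and the usual guard; MY_FLAG is a hypothetical stand-in for whatever common.sh tests there, not SPDK's actual fix:

  # MY_FLAG is hypothetical and empty, matching the trace above.
  MY_FLAG=""
  # Reproduces the logged error: '[' '' -eq 1 ']' -> "integer expression expected"
  [ "$MY_FLAG" -eq 1 ] 2>/dev/null || echo "empty string is not an integer"
  # Guarded form: default to 0 so -eq always receives a number.
  if [ "${MY_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi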
00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:22.450 11:06:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:30.596 11:06:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:30.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:30.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:30.596 11:06:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:30.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:30.596 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:30.596 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:30.597 11:06:22 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:30.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:30.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:05:30.597 00:05:30.597 --- 10.0.0.2 ping statistics --- 00:05:30.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:30.597 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:30.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:30.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:05:30.597 00:05:30.597 --- 10.0.0.1 ping statistics --- 00:05:30.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:30.597 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2513667 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2513667 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2513667 ']' 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.597 11:06:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.597 [2024-11-20 11:06:22.567473] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
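The network bring-up traced above reduces to a short, reproducible recipe: the PCI scan matched two Intel E810 ports (8086:159b, exposed as cvl_0_0 and cvl_0_1), one port was moved into a network namespace to play the NVMe/TCP target while the other stayed in the host namespace as the initiator, and connectivity was checked with ping in both directions. Condensed from the exact commands in the trace; device names and addresses are the ones the log reports, and the log additionally tags the iptables rule with an SPDK_NVMF comment so teardown can strip only its own rules:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host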
00:05:30.597 [2024-11-20 11:06:22.567535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:30.597 [2024-11-20 11:06:22.667221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.597 [2024-11-20 11:06:22.720306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:30.597 [2024-11-20 11:06:22.720353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:30.597 [2024-11-20 11:06:22.720363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.597 [2024-11-20 11:06:22.720370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.597 [2024-11-20 11:06:22.720376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:30.597 [2024-11-20 11:06:22.722212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.597 [2024-11-20 11:06:22.722398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.597 [2024-11-20 11:06:22.722398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.859 [2024-11-20 11:06:23.449085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.859 Malloc0 00:05:30.859 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 Delay0 
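The backing device assembled above is a 64 MiB malloc bdev with a 4 KiB block size, wrapped in a delay bdev whose four latency arguments are all 1,000,000 microseconds; with roughly a second of injected average and p99 read/write latency, I/O to Delay0 stays queued long enough for abort commands to have something to cancel. The same two steps as direct rpc.py calls (rpc_cmd in the trace is the test harness's wrapper around scripts/rpc.py; per SPDK's RPC documentation, -r/-t/-w/-n are the average and p99 read and write latencies in microseconds):

  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB, 4 KiB blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s per I/O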
00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 [2024-11-20 11:06:23.533621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.860 11:06:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:31.122 [2024-11-20 11:06:23.685331] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:33.180 Initializing NVMe Controllers 00:05:33.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:33.180 controller IO queue size 128 less than required 00:05:33.180 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:33.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:33.180 Initialization complete. Launching workers. 
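The remaining target-side wiring traced above exposes Delay0 over NVMe/TCP: create the TCP transport, create subsystem cnode0 (-a allows any host, -s sets the serial number), attach Delay0 as namespace 1, then add a listener on the namespaced address plus a discovery listener on the same port. Written as direct rpc.py invocations, with flags reproduced verbatim from the trace:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example binary is then pointed at that listener with queue depth 128 on a single core; its completion summary follows in the log below.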
00:05:33.180 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27984 00:05:33.180 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28045, failed to submit 62 00:05:33.180 success 27988, unsuccessful 57, failed 0 00:05:33.180 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:33.180 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.180 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:33.180 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.180 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:33.180 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:33.181 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:33.181 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:33.181 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:33.181 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:33.181 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:33.181 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:33.181 rmmod nvme_tcp 00:05:33.181 rmmod nvme_fabrics 00:05:33.181 rmmod nvme_keyring 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2513667 ']' 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2513667 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2513667 ']' 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2513667 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2513667 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2513667' 00:05:33.441 killing process with pid 2513667 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2513667 00:05:33.441 11:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2513667 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:33.441 11:06:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:33.441 11:06:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:36.005 00:05:36.005 real 0m13.270s 00:05:36.005 user 0m14.157s 00:05:36.005 sys 0m6.546s 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 ************************************ 00:05:36.005 END TEST nvmf_abort 00:05:36.005 ************************************ 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:36.005 ************************************ 00:05:36.005 START TEST nvmf_ns_hotplug_stress 00:05:36.005 ************************************ 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:36.005 * Looking for test storage... 
00:05:36.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.005 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:36.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.006 --rc genhtml_branch_coverage=1 00:05:36.006 --rc genhtml_function_coverage=1 00:05:36.006 --rc genhtml_legend=1 00:05:36.006 --rc geninfo_all_blocks=1 00:05:36.006 --rc geninfo_unexecuted_blocks=1 00:05:36.006 00:05:36.006 ' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:36.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.006 --rc genhtml_branch_coverage=1 00:05:36.006 --rc genhtml_function_coverage=1 00:05:36.006 --rc genhtml_legend=1 00:05:36.006 --rc geninfo_all_blocks=1 00:05:36.006 --rc geninfo_unexecuted_blocks=1 00:05:36.006 00:05:36.006 ' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:36.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.006 --rc genhtml_branch_coverage=1 00:05:36.006 --rc genhtml_function_coverage=1 00:05:36.006 --rc genhtml_legend=1 00:05:36.006 --rc geninfo_all_blocks=1 00:05:36.006 --rc geninfo_unexecuted_blocks=1 00:05:36.006 00:05:36.006 ' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:36.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.006 --rc genhtml_branch_coverage=1 00:05:36.006 --rc genhtml_function_coverage=1 00:05:36.006 --rc genhtml_legend=1 00:05:36.006 --rc geninfo_all_blocks=1 00:05:36.006 --rc geninfo_unexecuted_blocks=1 00:05:36.006 00:05:36.006 ' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:36.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:36.006 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:36.007 11:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:44.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:44.149 
11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:44.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:44.149 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:44.149 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:44.149 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:44.150 11:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:44.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:44.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:05:44.150 00:05:44.150 --- 10.0.0.2 ping statistics --- 00:05:44.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.150 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:44.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:44.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:05:44.150 00:05:44.150 --- 10.0.0.1 ping statistics --- 00:05:44.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.150 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2518710 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2518710 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2518710 ']' 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.150 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.150 [2024-11-20 11:06:36.133352] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:05:44.150 [2024-11-20 11:06:36.133420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:44.150 [2024-11-20 11:06:36.233005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.150 [2024-11-20 11:06:36.284136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:44.150 [2024-11-20 11:06:36.284194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:44.150 [2024-11-20 11:06:36.284203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.150 [2024-11-20 11:06:36.284210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.150 [2024-11-20 11:06:36.284216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
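What follows is the body of the hotplug stress test: spdk_nvme_perf drives random reads against the target for 30 seconds while the script repeatedly hot-removes and re-adds namespace 1 of cnode1 and resizes the NULL1 bdev. A minimal sketch of that cycle, reconstructed from the xtrace lines below — the loop structure and error handling here are assumptions; only the rpc.py invocations themselves appear verbatim in the trace:

  # Reconstructed sketch of the cycle traced at ns_hotplug_stress.sh@44-50;
  # not the literal script source (its loop bounds and exit handling may differ).
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do    # keep going while spdk_nvme_perf is alive
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove nsid 1 under active I/O
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-attach the Delay0 bdev
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"                        # grow NULL1 (size argument in MB)
  done

That resize step is why null_size climbs 1001, 1002, ... through the trace below, and why each pass is preceded by a kill -0 check against the perf PID (2519125).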
00:05:44.150 [2024-11-20 11:06:36.286010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.150 [2024-11-20 11:06:36.286195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.150 [2024-11-20 11:06:36.286199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.412 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.412 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:44.412 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:44.412 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.412 11:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:44.412 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:44.412 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:44.412 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:44.673 [2024-11-20 11:06:37.170509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.673 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:44.673 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:44.934 [2024-11-20 11:06:37.573574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:44.934 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:45.195 11:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:45.456 Malloc0 00:05:45.456 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:45.717 Delay0 00:05:45.717 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.717 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:45.979 NULL1 00:05:45.979 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:46.240 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:46.240 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2519125 00:05:46.240 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:46.240 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.501 11:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.501 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:46.501 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:46.762 true 00:05:46.762 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:46.762 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.023 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.023 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:47.023 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:47.284 true 00:05:47.284 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:47.284 11:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.545 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.545 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:47.545 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:47.807 true 00:05:47.807 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:47.807 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.068 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.068 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:48.068 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:48.333 true 00:05:48.333 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:48.333 11:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.593 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.593 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:48.593 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:48.852 true 00:05:48.852 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:48.852 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.113 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.373 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:49.373 11:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:49.373 true 00:05:49.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:49.373 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.634 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.894 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:49.894 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:49.894 true 00:05:49.894 11:06:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:49.894 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.154 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.414 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:50.414 11:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:50.414 true 00:05:50.414 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:50.414 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.674 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.934 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:50.934 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:50.934 true 00:05:51.195 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:51.195 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.195 11:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.456 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:51.456 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:51.456 true 00:05:51.719 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:51.719 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.719 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.980 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:51.980 11:06:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:51.980 true 00:05:52.241 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:52.241 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.241 11:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.502 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:52.502 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:52.502 true 00:05:52.763 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:52.763 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.763 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.024 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:53.025 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:53.025 true 00:05:53.285 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:53.285 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.285 11:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.547 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:53.547 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:53.807 true 00:05:53.807 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:53.807 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.807 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.068 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:54.068 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:54.329 true 00:05:54.329 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:54.329 11:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.329 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.590 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:54.590 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:54.850 true 00:05:54.850 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:54.850 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.110 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.110 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:55.110 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:55.370 true 00:05:55.370 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:55.370 11:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.630 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.630 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:55.630 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:55.890 true 00:05:55.890 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:55.890 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.150 11:06:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.410 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:56.410 11:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:56.410 true 00:05:56.410 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:56.410 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.669 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.928 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:56.928 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:56.928 true 00:05:56.928 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:56.928 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.187 11:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.447 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:57.447 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:57.707 true 00:05:57.707 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:57.707 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.707 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.967 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:57.967 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:58.227 true 00:05:58.227 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:58.227 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.227 11:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.486 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:58.486 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:58.746 true 00:05:58.746 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:58.746 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.007 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.007 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:59.007 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:59.268 true 00:05:59.268 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:59.268 11:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.529 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.529 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:59.529 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:59.791 true 00:05:59.791 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:05:59.791 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.052 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.313 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:00.313 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:00.313 true 00:06:00.313 11:06:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:00.313 11:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.574 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.835 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:00.835 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:00.835 true 00:06:00.835 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:00.835 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.095 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.356 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:01.356 11:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:01.356 true 00:06:01.356 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:01.356 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.617 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.878 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:01.878 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:01.878 true 00:06:01.878 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:01.878 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.138 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.398 11:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:02.398 11:06:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:02.398 true 00:06:02.658 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:02.658 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.658 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.919 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:02.919 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:03.179 true 00:06:03.179 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:03.179 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.179 11:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.441 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:03.441 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:03.700 true 00:06:03.700 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:03.700 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.700 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.961 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:03.961 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:04.221 true 00:06:04.221 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:04.221 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.481 11:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.481 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:04.481 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:04.742 true 00:06:04.742 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:04.742 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.003 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.003 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:05.003 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:05.263 true 00:06:05.263 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:05.263 11:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.524 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.524 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:05.524 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:05.783 true 00:06:05.784 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:05.784 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.044 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.044 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:06.044 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:06.306 true 00:06:06.306 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:06.306 11:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.566 11:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.566 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:06.566 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:06.826 true 00:06:06.826 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:06.826 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.086 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.346 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:07.346 11:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:07.346 true 00:06:07.346 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:07.346 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.606 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.866 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:07.866 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:07.866 true 00:06:07.866 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:07.866 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.127 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.388 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:08.388 11:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:08.388 true 00:06:08.388 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:08.388 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.649 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.909 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:08.909 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:08.909 true 00:06:09.170 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:09.170 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.170 11:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.429 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:09.429 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:09.429 true 00:06:09.690 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:09.690 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.690 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.951 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:09.951 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:10.211 true 00:06:10.211 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:10.211 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.211 11:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.473 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:10.473 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:10.732 true 00:06:10.732 11:07:03 
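[Editor's note] The repeating pattern above is one iteration of the hot-plug stress loop: check that the background I/O process (PID 2519125) is still alive, hot-remove namespace 1, re-attach the Delay0 bdev, then grow the NULL1 bdev by one more unit. A minimal bash sketch of that loop, reconstructed from the ns_hotplug_stress.sh@44-50 markers in the trace (the rpc variable and PERF_PID name are illustrative assumptions, not the script's actual text):

    # Sketch of the loop traced at ns_hotplug_stress.sh@44-50; names are assumed.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1022
    while kill -0 "$PERF_PID"; do                                     # @44: loop while the I/O generator runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove NS 1 under load
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                                  # @49: 1023, 1024, 1025, ...
        $rpc bdev_null_resize NULL1 "$null_size"                      # @50: resize NULL1 while attached
    done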
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:10.732 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.992 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.992 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:10.992 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:11.252 true 00:06:11.252 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:11.252 11:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.513 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.513 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:11.513 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:11.773 true 00:06:11.773 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:11.773 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.033 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.294 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:12.294 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:12.294 true 00:06:12.294 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:12.294 11:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.555 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.816 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:12.816 11:07:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:12.816 true 00:06:12.816 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:12.816 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.078 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.338 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:13.338 11:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:13.338 true 00:06:13.338 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:13.338 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.598 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.859 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:13.859 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:13.859 true 00:06:14.120 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:14.120 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.120 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.380 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:14.380 11:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:14.380 true 00:06:14.641 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:14.641 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.641 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.901 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:06:14.901 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:06:15.161 true 00:06:15.161 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:15.161 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.161 11:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.422 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:06:15.422 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:06:15.778 true 00:06:15.778 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:15.778 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.778 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.068 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:06:16.068 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:06:16.068 true 00:06:16.068 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125 00:06:16.333 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.333 11:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.333 Initializing NVMe Controllers 00:06:16.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:16.333 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:06:16.333 Controller IO queue size 128, less than required. 00:06:16.333 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:16.333 WARNING: Some requested NVMe devices were skipped 00:06:16.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:16.333 Initialization complete. Launching workers. 
00:06:16.333 ========================================================
00:06:16.333 Latency(us)
00:06:16.333 Device Information : IOPS MiB/s Average min max
00:06:16.333 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30840.91 15.06 4150.35 1134.16 8322.26
00:06:16.333 ========================================================
00:06:16.333 Total : 30840.91 15.06 4150.35 1134.16 8322.26
00:06:16.593 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:06:16.593 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
true
00:06:16.854 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2519125
00:06:16.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2519125) - No such process
00:06:16.854 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2519125
00:06:16.854 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.854 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.115 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:17.115 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:17.115 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:17.115 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.115 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:17.377 null0
00:06:17.377 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.377 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.377 11:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:17.377 null1
00:06:17.377 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.377 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.377 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:17.638 null2
00:06:17.638 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.638 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
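[Editor's note] The summary above is the background I/O generator finishing (about 30.8k IOPS against NSID 2), so the next kill -0 fails with "No such process" and the resize loop exits. The script then reaps the worker, removes both namespaces, and provisions eight fresh null bdevs (null0 through null7; the creation loop continues below) for the multi-threaded phase. A sketch of that transition, following the @53-@60 markers (same assumed rpc and PERF_PID names as in the sketch above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    wait "$PERF_PID"                                             # @53: reap the finished I/O generator
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @54
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2   # @55
    nthreads=8                                                   # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                         # @59
        $rpc bdev_null_create "null$i" 100 4096                  # @60: 100 MiB bdev, 4096-byte blocks
    done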
11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:17.900 null3 00:06:17.900 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.900 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.900 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:17.900 null4 00:06:17.900 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.900 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.900 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:18.161 null5 00:06:18.161 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.161 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.161 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:18.422 null6 00:06:18.422 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.422 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.422 11:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:18.422 null7 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
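[Editor's note] From here the log interleaves eight background add_remove workers, one per NSID/bdev pair, whose PIDs the parent records so it can block on all of them (the "wait 2525908 2525911 ... 2525926" entry below). A sketch of the launcher, following the @62-@66 markers (reconstructed from the trace, not the script's verbatim text):

    for ((i = 0; i < nthreads; i++)); do   # @62
        add_remove $((i + 1)) "null$i" &   # @63: worker i pairs NSID i+1 with bdev null<i>
        pids+=($!)                         # @64: remember the worker's PID
    done
    wait "${pids[@]}"                      # @66: block until all eight workers exit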
00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.422 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.423 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.423 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2525908 2525911 2525913 2525916 2525918 2525921 2525923 2525926 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:18.685 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
7 nqn.2016-06.io.spdk:cnode1 null6 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:18.947 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
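[Editor's note] Each worker is the add_remove helper whose body the @14-@18 markers trace: ten iterations of attaching its bdev at a fixed NSID and detaching it again, so up to eight namespaces churn on cnode1 concurrently. A reconstructed sketch, with the same assumed rpc variable as above:

    add_remove() {
        local nsid=$1 bdev=$2                                                     # @14
        for ((i = 0; i < 10; i++)); do                                            # @16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: attach at fixed NSID
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: detach it again
        done
    }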
00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.209 11:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.470 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 
11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.732 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.994 11:07:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.994 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.255 11:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.517 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.517 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.517 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.778 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.778 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.040 11:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.040 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.305 11:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.305 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.305 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.305 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.567 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.827 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.827 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.827 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.827 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.828 11:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.828 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.089 11:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.089 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.350 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:22.350 rmmod nvme_tcp 00:06:22.611 rmmod nvme_fabrics 00:06:22.611 rmmod nvme_keyring 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2518710 ']' 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2518710 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2518710 ']' 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2518710 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2518710 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
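The interleaved @16-@18 records above come from ns_hotplug_stress.sh hot-adding and hot-removing namespaces as fast as the RPC layer allows. A minimal sketch of the pattern the trace suggests, with one backgrounded worker per namespace looping while i < 10; this is a reading of the trace, not the verbatim SPDK script, so the worker split and loop bounds should be treated as assumptions:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for n in {1..8}; do
    (
        for (( i = 0; i < 10; ++i )); do                            # @16
            # @17: re-attach null bdev (null0..null7) as namespace n
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
            # @18: hot-remove the same namespace again
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    ) &
done
wait
# The shuffled add/remove ordering in the log is the xtrace output of these
# concurrent workers interleaving.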
common/autotest_common.sh@972 -- # echo 'killing process with pid 2518710' 00:06:22.611 killing process with pid 2518710 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2518710 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2518710 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.611 11:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:25.158 00:06:25.158 real 0m49.101s 00:06:25.158 user 3m20.333s 00:06:25.158 sys 0m17.494s 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:25.158 ************************************ 00:06:25.158 END TEST nvmf_ns_hotplug_stress 00:06:25.158 ************************************ 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.158 ************************************ 00:06:25.158 START TEST nvmf_delete_subsystem 00:06:25.158 ************************************ 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:25.158 * Looking for test storage... 
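The nvmftestfini/killprocess sequence that closed the hotplug test above reduces to a handful of steps. A condensed sketch of what the nvmf/common.sh helpers did per this trace (variable names here are illustrative; the helpers' real bodies carry more error handling):

# unload the kernel initiator modules the test pulled in
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the nvmf target app; 2518710 was the pid recorded at startup
kill "$nvmf_app_pid"
wait "$nvmf_app_pid"    # works because the app was launched by this shell

# strip the SPDK_NVMF iptables rules and the test net devices
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # remove_spdk_ns
ip -4 addr flush cvl_0_1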
00:06:25.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.158 --rc genhtml_branch_coverage=1 00:06:25.158 --rc genhtml_function_coverage=1 00:06:25.158 --rc genhtml_legend=1 00:06:25.158 --rc geninfo_all_blocks=1 00:06:25.158 --rc geninfo_unexecuted_blocks=1 00:06:25.158 00:06:25.158 ' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.158 --rc genhtml_branch_coverage=1 00:06:25.158 --rc genhtml_function_coverage=1 00:06:25.158 --rc genhtml_legend=1 00:06:25.158 --rc geninfo_all_blocks=1 00:06:25.158 --rc geninfo_unexecuted_blocks=1 00:06:25.158 00:06:25.158 ' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.158 --rc genhtml_branch_coverage=1 00:06:25.158 --rc genhtml_function_coverage=1 00:06:25.158 --rc genhtml_legend=1 00:06:25.158 --rc geninfo_all_blocks=1 00:06:25.158 --rc geninfo_unexecuted_blocks=1 00:06:25.158 00:06:25.158 ' 00:06:25.158 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.159 --rc genhtml_branch_coverage=1 00:06:25.159 --rc genhtml_function_coverage=1 00:06:25.159 --rc genhtml_legend=1 00:06:25.159 --rc geninfo_all_blocks=1 00:06:25.159 --rc geninfo_unexecuted_blocks=1 00:06:25.159 00:06:25.159 ' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
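The scripts/common.sh records above are a version gate: lt 1.15 2 asks whether the installed lcov (1.15) predates 2.x, which decides between the old- and new-style coverage flags exported just afterwards. The comparison logic, reassembled from the trace into one readable block (this sketch handles only the <, > and == operators; the real cmp_versions covers more cases):

decimal() {                        # normalize one version component to a number
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] || d=0    # non-numeric parts compare as 0
    echo "$d"
}

cmp_versions() {                   # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v max a b
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
    IFS=.-: read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' ]]; return; }   # first differing field decides
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]              # every field equal
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "old lcov: use the --rc lcov_branch_coverage=1 style options"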
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
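The very long PATH values above are the result of paths/export.sh prepending its toolchain directories every time it is sourced, with no membership check, so repeated sourcing leaves repeated copies of each directory. Harmless, but noisy; an idempotent prepend such as the following (an illustrative pattern, not what export.sh currently does) would keep the variable flat:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;          # already on PATH: nothing to do
        *) PATH=$1:$PATH ;;
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH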
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:25.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:25.159 11:07:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
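Stepping back to the top of this stretch: the logged failure at nvmf/common.sh line 33 comes from '[' '' -eq 1 ']'. -eq is an arithmetic test, so an empty operand raises "integer expression expected" and the test merely returns non-zero, which the script tolerates. The underlying variable is not visible in the trace, so the name below is a placeholder; either expansion-with-default form silences the error:

# failing shape: [ "$SOME_FLAG" -eq 1 ] with SOME_FLAG unset or empty
if [ "${SOME_FLAG:-0}" -eq 1 ]; then      # default the empty value to 0
    echo "flag set"
fi
if (( ${SOME_FLAG:-0} == 1 )); then       # same guard in an arithmetic context
    echo "flag set"
fi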
local -ga x722 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:33.300 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:33.301 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.301 
11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:33.301 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:33.301 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:33.301 Found net devices under 0000:4b:00.1: cvl_0_1 
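The two "Found net devices under ..." lines above are the output of the NIC discovery in nvmf/common.sh: the script builds arrays of Intel (e810, x722) and Mellanox device IDs, filters the detected PCI functions against them, and resolves each match to its kernel interface through sysfs. A minimal sketch of that discovery, assuming a plain lspci scan in place of SPDK's pci_bus_cache lookup (the e810 IDs 0x1592/0x159b are the ones registered in the trace):

#!/usr/bin/env bash
# Sketch of the NIC discovery traced above; lspci -n replaces pci_bus_cache.
intel=8086
e810=(1592 159b)                      # device IDs added to the e810 array in the trace
pci_devs=()
while read -r addr _ id; do
    for dev in "${e810[@]}"; do
        [[ $id == "$intel:$dev" ]] && pci_devs+=("0000:$addr")
    done
done < <(lspci -n)

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as common.sh@411
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

On this host the sketch would report cvl_0_0 and cvl_0_1, which the trace then splits into target and initiator interfaces before moving the target side into its own network namespace.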
00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.301 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.302 11:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:06:33.302 00:06:33.302 --- 10.0.0.2 ping statistics --- 00:06:33.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.302 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:33.302 00:06:33.302 --- 10.0.0.1 ping statistics --- 00:06:33.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.302 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2531153 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2531153 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2531153 ']' 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.302 11:07:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.302 [2024-11-20 11:07:25.334667] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:06:33.302 [2024-11-20 11:07:25.334734] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.302 [2024-11-20 11:07:25.406585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.302 [2024-11-20 11:07:25.453251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.302 [2024-11-20 11:07:25.453301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.302 [2024-11-20 11:07:25.453307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.302 [2024-11-20 11:07:25.453312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.302 [2024-11-20 11:07:25.453317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:33.302 [2024-11-20 11:07:25.456187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.302 [2024-11-20 11:07:25.456211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.302 [2024-11-20 11:07:25.608130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.302 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:33.303 11:07:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.303 [2024-11-20 11:07:25.632454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.303 NULL1 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.303 Delay0 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2531173 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:33.303 11:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:33.303 [2024-11-20 11:07:25.759442] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
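Everything the delete_subsystem test does above and below goes through rpc_cmd; the commands and arguments in the following sketch are taken verbatim from the trace, and only the rpc.py invocation and the backgrounding of perf are assumptions. The bdev_delay layer (1,000,000 us, i.e. one second, added to every op) keeps the 128-deep queue full, so the nvmf_delete_subsystem at @32 below races an active connection:

#!/usr/bin/env bash
# Condensed sketch of test/nvmf/target/delete_subsystem.sh as traced here.
# Assumption: rpc.py is on PATH and talks to the default /var/tmp/spdk.sock.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512          # 1000 MB backing bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete under load

delay=0                                  # bounded wait, as at @34-38 below
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 30 )) && exit 1
    sleep 0.5
done

The storm of "Read/Write completed with error (sct=0, sc=8)" lines that follows is the expected outcome: deleting the subsystem fails the queued I/O, and perf exits reporting that errors occurred, which is what lets the kill -0 loop fall through.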
00:06:35.218 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:35.218 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.218 11:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 [2024-11-20 11:07:27.924494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d632c0 is same with the state(6) to be set 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write 
completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 
Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 starting I/O failed: -6 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 [2024-11-20 11:07:27.929301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3db4000c40 is same with the state(6) to be set 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 
Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Read completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:35.218 Write completed with error (sct=0, sc=8) 00:06:36.611 [2024-11-20 11:07:28.901361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d649a0 is same with the state(6) to be set 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 [2024-11-20 11:07:28.927832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d634a0 is same with the state(6) to be set 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with 
error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 [2024-11-20 11:07:28.928399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d63860 is same with the state(6) to be set 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 [2024-11-20 11:07:28.931311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3db400d7c0 is same with the state(6) to be set 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Write completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error 
(sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 Read completed with error (sct=0, sc=8) 00:06:36.611 [2024-11-20 11:07:28.931447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3db400d020 is same with the state(6) to be set 00:06:36.611 Initializing NVMe Controllers 00:06:36.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:36.611 Controller IO queue size 128, less than required. 00:06:36.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:36.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:36.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:36.612 Initialization complete. Launching workers. 00:06:36.612 ======================================================== 00:06:36.612 Latency(us) 00:06:36.612 Device Information : IOPS MiB/s Average min max 00:06:36.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.82 0.08 906673.73 389.72 1006477.15 00:06:36.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.28 0.09 887170.92 375.72 1011830.46 00:06:36.612 ======================================================== 00:06:36.612 Total : 339.10 0.17 896650.26 375.72 1011830.46 00:06:36.612 00:06:36.612 [2024-11-20 11:07:28.931951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d649a0 (9): Bad file descriptor 00:06:36.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:36.612 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.612 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:36.612 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2531173 00:06:36.612 11:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2531173 00:06:36.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2531173) - No such process 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2531173 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2531173 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:36.873 
11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2531173 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.873 [2024-11-20 11:07:29.464491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2531860 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:36.873 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:36.873 [2024-11-20 11:07:29.570277] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:06:37.445 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:37.445 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:37.445 11:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.014 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.014 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:38.014 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.274 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.274 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:38.274 11:07:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.844 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:38.844 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:38.844 11:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.415 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.415 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:39.415 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.985 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.985 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:39.985 11:07:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.246 Initializing NVMe Controllers 00:06:40.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:40.246 Controller IO queue size 128, less than required. 00:06:40.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:40.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:40.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:40.246 Initialization complete. Launching workers. 
00:06:40.246 ======================================================== 00:06:40.246 Latency(us) 00:06:40.246 Device Information : IOPS MiB/s Average min max 00:06:40.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002298.96 1000184.98 1041558.21 00:06:40.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002988.83 1000225.07 1008117.26 00:06:40.246 ======================================================== 00:06:40.246 Total : 256.00 0.12 1002643.90 1000184.98 1041558.21 00:06:40.246 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2531860 00:06:40.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2531860) - No such process 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2531860 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.507 rmmod nvme_tcp 00:06:40.507 rmmod nvme_fabrics 00:06:40.507 rmmod nvme_keyring 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2531153 ']' 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2531153 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2531153 ']' 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2531153 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2531153 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2531153' 00:06:40.507 killing process with pid 2531153 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2531153 00:06:40.507 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2531153 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.768 11:07:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.701 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:42.701 00:06:42.701 real 0m17.852s 00:06:42.702 user 0m29.864s 00:06:42.702 sys 0m6.767s 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.702 ************************************ 00:06:42.702 END TEST nvmf_delete_subsystem 00:06:42.702 ************************************ 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:42.702 ************************************ 00:06:42.702 START TEST nvmf_host_management 00:06:42.702 ************************************ 00:06:42.702 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:42.962 * Looking for test storage... 
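The host_management trace that follows opens with a version probe: scripts/common.sh splits the installed lcov version and the threshold 1.15 on '.', '-' and ':' (the IFS=.-: and read -ra ver1 lines below) and compares them field by field to decide which coverage flags to export. A simplified sketch of that lt/cmp_versions logic, assuming plain numeric dotted versions:

# Simplified sketch of the cmp_versions walk traced below (scripts/common.sh@373).
lt() {   # succeeds when $1 < $2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_*_coverage=1 options'

In the trace this comparison succeeds (1 < 2 on the first field), so the legacy lcov_branch_coverage/lcov_function_coverage options are exported into LCOV_OPTS before the test sources nvmf/common.sh.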
00:06:42.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.962 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.962 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.962 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.963 --rc genhtml_branch_coverage=1 00:06:42.963 --rc genhtml_function_coverage=1 00:06:42.963 --rc genhtml_legend=1 00:06:42.963 --rc geninfo_all_blocks=1 00:06:42.963 --rc geninfo_unexecuted_blocks=1 00:06:42.963 00:06:42.963 ' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.963 --rc genhtml_branch_coverage=1 00:06:42.963 --rc genhtml_function_coverage=1 00:06:42.963 --rc genhtml_legend=1 00:06:42.963 --rc geninfo_all_blocks=1 00:06:42.963 --rc geninfo_unexecuted_blocks=1 00:06:42.963 00:06:42.963 ' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.963 --rc genhtml_branch_coverage=1 00:06:42.963 --rc genhtml_function_coverage=1 00:06:42.963 --rc genhtml_legend=1 00:06:42.963 --rc geninfo_all_blocks=1 00:06:42.963 --rc geninfo_unexecuted_blocks=1 00:06:42.963 00:06:42.963 ' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.963 --rc genhtml_branch_coverage=1 00:06:42.963 --rc genhtml_function_coverage=1 00:06:42.963 --rc genhtml_legend=1 00:06:42.963 --rc geninfo_all_blocks=1 00:06:42.963 --rc geninfo_unexecuted_blocks=1 00:06:42.963 00:06:42.963 ' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:42.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:42.963 11:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:51.111 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:51.111 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.111 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:51.112 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.112 11:07:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:51.112 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.112 11:07:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:06:51.112 00:06:51.112 --- 10.0.0.2 ping statistics --- 00:06:51.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.112 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:06:51.112 00:06:51.112 --- 10.0.0.1 ping statistics --- 00:06:51.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.112 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2536882 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2536882 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:51.112 11:07:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2536882 ']' 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.112 11:07:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.112 [2024-11-20 11:07:43.241904] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:06:51.112 [2024-11-20 11:07:43.241967] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.112 [2024-11-20 11:07:43.346318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.112 [2024-11-20 11:07:43.400141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.112 [2024-11-20 11:07:43.400208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.112 [2024-11-20 11:07:43.400217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.112 [2024-11-20 11:07:43.400224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.112 [2024-11-20 11:07:43.400231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
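Two details of the target launch above are worth calling out. First, the core mask: -m 0x1E is 0b11110, i.e. cores 1-4, which matches the four "Reactor started on core N" notices that follow (bdevperf is later started separately with mask 0x1 on core 0). Second, waitforlisten blocks until the target's RPC Unix socket answers; a sketch of that polling pattern follows — the socket path, scripts/rpc.py, and the rpc_get_methods method are real SPDK pieces, but the loop shape and retry count here are assumptions, not SPDK's actual helper:

    # Sketch: poll until the target's RPC socket accepts a request (assumed loop).
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Socket must exist and a trivial RPC must succeed before we proceed.
            if [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }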
00:06:51.112 [2024-11-20 11:07:43.402330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.112 [2024-11-20 11:07:43.402568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.112 [2024-11-20 11:07:43.402728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.112 [2024-11-20 11:07:43.402730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.374 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.374 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:51.374 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:51.374 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.374 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.743 [2024-11-20 11:07:44.124982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.743 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.744 Malloc0 00:06:51.744 [2024-11-20 11:07:44.205896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2537254 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2537254 /var/tmp/bdevperf.sock 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2537254 ']' 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:51.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:51.744 { 00:06:51.744 "params": { 00:06:51.744 "name": "Nvme$subsystem", 00:06:51.744 "trtype": "$TEST_TRANSPORT", 00:06:51.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:51.744 "adrfam": "ipv4", 00:06:51.744 "trsvcid": "$NVMF_PORT", 00:06:51.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:51.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:51.744 "hdgst": ${hdgst:-false}, 00:06:51.744 "ddgst": ${ddgst:-false} 00:06:51.744 }, 00:06:51.744 "method": "bdev_nvme_attach_controller" 00:06:51.744 } 00:06:51.744 EOF 00:06:51.744 )") 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:51.744 11:07:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:51.744 "params": { 00:06:51.744 "name": "Nvme0", 00:06:51.744 "trtype": "tcp", 00:06:51.744 "traddr": "10.0.0.2", 00:06:51.744 "adrfam": "ipv4", 00:06:51.744 "trsvcid": "4420", 00:06:51.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:51.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:51.744 "hdgst": false, 00:06:51.744 "ddgst": false 00:06:51.744 }, 00:06:51.744 "method": "bdev_nvme_attach_controller" 00:06:51.744 }' 00:06:51.744 [2024-11-20 11:07:44.317795] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
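The bdevperf invocation above receives its configuration as --json /dev/fd/63, the tell-tale of bash process substitution: the JSON that gen_nvmf_target_json prints (the bdev_nvme_attach_controller fragment shown just above, with -q 64 queue depth, -o 65536-byte I/Os, a verify workload, and a 10-second run) is piped in without a temporary file. A sketch of the wiring, using only names and paths from the log; the exact harness plumbing is an assumption:

    # Sketch: feed generated JSON to bdevperf via process substitution;
    # inside the child process the pipe appears as /dev/fd/63 (or similar).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10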
00:06:51.744 [2024-11-20 11:07:44.317873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537254 ] 00:06:51.744 [2024-11-20 11:07:44.411463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.744 [2024-11-20 11:07:44.465097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.006 Running I/O for 10 seconds... 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=649 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 649 -ge 100 ']' 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:52.581 11:07:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.581 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.581 [2024-11-20 11:07:45.231928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.581 [2024-11-20 11:07:45.232095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232346]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x793130 is same with the state(6) to be set 00:06:52.582 [2024-11-20 11:07:45.232622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.582 [2024-11-20 11:07:45.232691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.582 [2024-11-20 11:07:45.232716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.582 [2024-11-20 11:07:45.232725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.582 [2024-11-20 11:07:45.232735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.582 [2024-11-20 11:07:45.232743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.582 [2024-11-20 11:07:45.232753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.582 [2024-11-20 11:07:45.232762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~60 further near-identical command/completion pairs elided (11:07:45.232771 - 11:07:45.233812): every remaining outstanding command on qid:1 (WRITE cid:5 lba:98944 and READ cid:6-63 lba:90880-98176, all len:128) is printed by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) as the submission queue is deleted during the controller reset ...]
00:06:52.584 [2024-11-20 11:07:45.233821]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:52.584 [2024-11-20 11:07:45.233831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.584 [2024-11-20 11:07:45.233840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc51190 is same with the state(6) to be set 00:06:52.584 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.584 [2024-11-20 11:07:45.235153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:52.584 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:52.584 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.584 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.584 task offset: 98432 on job bdev=Nvme0n1 fails 00:06:52.584 00:06:52.584 Latency(us) 00:06:52.584 [2024-11-20T10:07:45.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.584 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:52.584 Job: Nvme0n1 ended in about 0.52 seconds with error 00:06:52.584 Verification LBA range: start 0x0 length 0x400 00:06:52.584 Nvme0n1 : 0.52 1369.03 85.56 123.41 0.00 41763.62 4341.76 36918.61 00:06:52.584 [2024-11-20T10:07:45.326Z] =================================================================================================================== 00:06:52.584 [2024-11-20T10:07:45.326Z] Total : 1369.03 85.56 123.41 0.00 41763.62 4341.76 36918.61 00:06:52.584 [2024-11-20 11:07:45.237401] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.584 [2024-11-20 11:07:45.237443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa38000 (9): Bad file descriptor 00:06:52.584 [2024-11-20 11:07:45.239251] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:52.584 [2024-11-20 11:07:45.239356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:52.584 [2024-11-20 11:07:45.239391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:52.584 [2024-11-20 11:07:45.239410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:52.584 [2024-11-20 11:07:45.239418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:52.584 [2024-11-20 11:07:45.239427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:52.584 [2024-11-20 11:07:45.239434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa38000 00:06:52.584 [2024-11-20 11:07:45.239457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa38000 (9): Bad file descriptor 00:06:52.584 [2024-11-20 
11:07:45.239471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:52.584 [2024-11-20 11:07:45.239479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:52.584 [2024-11-20 11:07:45.239490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:06:52.584 [2024-11-20 11:07:45.239501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:06:52.584 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.584 11:07:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2537254 00:06:53.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2537254) - No such process 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:53.526 { 00:06:53.526 "params": { 00:06:53.526 "name": "Nvme$subsystem", 00:06:53.526 "trtype": "$TEST_TRANSPORT", 00:06:53.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:53.526 "adrfam": "ipv4", 00:06:53.526 "trsvcid": "$NVMF_PORT", 00:06:53.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:53.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:53.526 "hdgst": ${hdgst:-false}, 00:06:53.526 "ddgst": ${ddgst:-false} 00:06:53.526 }, 00:06:53.526 "method": "bdev_nvme_attach_controller" 00:06:53.526 } 00:06:53.526 EOF 00:06:53.526 )") 00:06:53.526 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:53.787 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:06:53.787 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:53.787 11:07:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:53.787 "params": { 00:06:53.787 "name": "Nvme0", 00:06:53.787 "trtype": "tcp", 00:06:53.787 "traddr": "10.0.0.2", 00:06:53.787 "adrfam": "ipv4", 00:06:53.787 "trsvcid": "4420", 00:06:53.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:53.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:53.787 "hdgst": false, 00:06:53.787 "ddgst": false 00:06:53.787 }, 00:06:53.787 "method": "bdev_nvme_attach_controller" 00:06:53.787 }' 00:06:53.787 [2024-11-20 11:07:46.307216] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:06:53.787 [2024-11-20 11:07:46.307273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537608 ] 00:06:53.787 [2024-11-20 11:07:46.398581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.787 [2024-11-20 11:07:46.433341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.048 Running I/O for 1 seconds... 00:06:54.989 1681.00 IOPS, 105.06 MiB/s 00:06:54.989 Latency(us) 00:06:54.989 [2024-11-20T10:07:47.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.989 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:54.989 Verification LBA range: start 0x0 length 0x400 00:06:54.989 Nvme0n1 : 1.01 1726.10 107.88 0.00 0.00 36391.88 682.67 34078.72 00:06:54.989 [2024-11-20T10:07:47.731Z] =================================================================================================================== 00:06:54.989 [2024-11-20T10:07:47.731Z] Total : 1726.10 107.88 0.00 0.00 36391.88 682.67 34078.72 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:55.249 rmmod nvme_tcp 00:06:55.249 rmmod nvme_fabrics 00:06:55.249 rmmod nvme_keyring 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2536882 ']' 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2536882 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2536882 ']' 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2536882 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2536882 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2536882' 00:06:55.249 killing process with pid 2536882 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2536882 00:06:55.249 11:07:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2536882 00:06:55.249 [2024-11-20 11:07:47.986604] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.510 11:07:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.425 11:07:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:57.425 00:06:57.425 real 0m14.681s 00:06:57.425 user 0m23.030s 00:06:57.425 sys 0m6.884s 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:57.425 ************************************ 00:06:57.425 END TEST nvmf_host_management 00:06:57.425 ************************************ 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.425 11:07:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.687 ************************************ 00:06:57.687 START TEST nvmf_lvol 00:06:57.687 ************************************ 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:57.687 * Looking for test storage... 00:06:57.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.687 --rc genhtml_branch_coverage=1 00:06:57.687 --rc genhtml_function_coverage=1 00:06:57.687 --rc genhtml_legend=1 00:06:57.687 --rc geninfo_all_blocks=1 00:06:57.687 --rc geninfo_unexecuted_blocks=1 00:06:57.687 00:06:57.687 ' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.687 --rc genhtml_branch_coverage=1 00:06:57.687 --rc genhtml_function_coverage=1 00:06:57.687 --rc genhtml_legend=1 00:06:57.687 --rc geninfo_all_blocks=1 00:06:57.687 --rc geninfo_unexecuted_blocks=1 00:06:57.687 00:06:57.687 ' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.687 --rc genhtml_branch_coverage=1 00:06:57.687 --rc genhtml_function_coverage=1 00:06:57.687 --rc genhtml_legend=1 00:06:57.687 --rc geninfo_all_blocks=1 00:06:57.687 --rc geninfo_unexecuted_blocks=1 00:06:57.687 00:06:57.687 ' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.687 --rc genhtml_branch_coverage=1 00:06:57.687 --rc genhtml_function_coverage=1 00:06:57.687 --rc genhtml_legend=1 00:06:57.687 --rc geninfo_all_blocks=1 00:06:57.687 --rc geninfo_unexecuted_blocks=1 00:06:57.687 00:06:57.687 ' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
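The shell trace above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt hands both version strings to cmp_versions, which splits them on '.', '-' and ':' and compares the fields numerically, left to right. A minimal standalone sketch of that comparison (the function name ver_lt and the zero-padding of missing fields are mine; the real helper also normalizes each field through its decimal() check):

#!/usr/bin/env bash
# Sketch of the field-by-field version comparison traced above.
ver_lt() {                    # ver_lt 1.15 2  -> returns 0 (true) if $1 < $2
    local IFS='.-:'
    local -a v1=($1) v2=($2)  # split each version on '.', '-' and ':'
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        ((a < b)) && return 0               # first differing field decides
        ((a > b)) && return 1
    done
    return 1                                # equal is not "less than"
}
ver_lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_*_coverage=1 option spelling'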
00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.687 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.688 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.949 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:57.949 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:57.949 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.949 11:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.093 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:06.094 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:06.094 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.094 11:07:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:06.094 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:06.094 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:07:06.094 00:07:06.094 --- 10.0.0.2 ping statistics --- 00:07:06.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.094 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:06.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:07:06.094 00:07:06.094 --- 10.0.0.1 ping statistics --- 00:07:06.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.094 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2542281 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2542281 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2542281 ']' 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:06.094 [2024-11-20 11:07:58.029959] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
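For reference, the nvmf_tcp_init sequence traced above boils down to a short, reproducible setup: the target-side port (cvl_0_0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic between them crosses the physical link even on a single host. Condensed from the commands in the trace (run as root; interface names are the ones the log discovered):

# Put the target port in its own netns; keep the initiator port in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # default NVMe/TCP port
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator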
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2542281
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2542281
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2542281 ']'
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:06.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.094 11:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:06.094 [2024-11-20 11:07:58.029959] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:07:06.094 [2024-11-20 11:07:58.030024] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:06.094 [2024-11-20 11:07:58.127226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.094 [2024-11-20 11:07:58.179172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:06.094 [2024-11-20 11:07:58.179225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:06.094 [2024-11-20 11:07:58.179234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:06.094 [2024-11-20 11:07:58.179241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:06.095 [2024-11-20 11:07:58.179248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:06.095 [2024-11-20 11:07:58.181099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.095 [2024-11-20 11:07:58.181266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.095 [2024-11-20 11:07:58.181453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:06.356 11:07:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:06.356 [2024-11-20 11:07:59.073573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:06.616 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:07:06.616 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:07:06.616 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:07:06.875 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:07:06.875 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:07:07.137 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:07:07.398 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b5c0721e-74ef-4420-a2f3-a70c6cbc0b1b
00:07:07.398 11:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5c0721e-74ef-4420-a2f3-a70c6cbc0b1b lvol 20
00:07:07.659 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f572356e-aa21-4a5b-86e7-cfe61879e294
00:07:07.659 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:07.921 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f572356e-aa21-4a5b-86e7-cfe61879e294
00:07:07.921 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:08.183 [2024-11-20 11:08:00.740722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:08.183 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:08.445 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:07:08.445 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2542816
00:07:08.445 11:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:07:09.387 11:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f572356e-aa21-4a5b-86e7-cfe61879e294 MY_SNAPSHOT
00:07:09.647 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0667c5bd-e025-43cf-9e0f-a333b267782f
00:07:09.647 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f572356e-aa21-4a5b-86e7-cfe61879e294 30
00:07:09.908 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0667c5bd-e025-43cf-9e0f-a333b267782f MY_CLONE
00:07:09.908 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9443c895-3052-466a-9555-9edd6129c254
00:07:09.908 11:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9443c895-3052-466a-9555-9edd6129c254
00:07:10.480 11:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2542816
00:07:18.797 Initializing NVMe Controllers
00:07:18.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:18.797 Controller IO queue size 128, less than required.
00:07:18.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:18.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:18.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:18.797 Initialization complete. Launching workers.
00:07:18.797 ========================================================
00:07:18.797 Latency(us)
00:07:18.797 Device Information : IOPS MiB/s Average min max
00:07:18.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15990.80 62.46 8004.54 1634.65 38547.39
00:07:18.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17223.30 67.28 7433.04 713.71 58760.95
00:07:18.797 ========================================================
00:07:18.797 Total : 33214.10 129.74 7708.18 713.71 58760.95
00:07:18.797
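The run that just finished is the heart of the lvol test: while spdk_nvme_perf holds the subsystem under random-write load, the script snapshots the live lvol, resizes the origin, clones the snapshot and inflates the clone. Stripped of the xtrace noise it is roughly the sketch below; the UUIDs are the ones this run allocated, and the size argument is in MiB (the grow test later in this log uses the same convention via its lvol_bdev_size_mb variable):

    rpc.py bdev_lvol_snapshot f572356e-aa21-4a5b-86e7-cfe61879e294 MY_SNAPSHOT  # freeze the origin while I/O continues
    rpc.py bdev_lvol_resize f572356e-aa21-4a5b-86e7-cfe61879e294 30             # grow the origin from 20 to 30
    rpc.py bdev_lvol_clone 0667c5bd-e025-43cf-9e0f-a333b267782f MY_CLONE        # writable clone of the snapshot
    rpc.py bdev_lvol_inflate 9443c895-3052-466a-9555-9edd6129c254               # fully allocate the clone, detaching it from its parent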
00:07:18.797 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:18.797 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f572356e-aa21-4a5b-86e7-cfe61879e294
00:07:19.058 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5c0721e-74ef-4420-a2f3-a70c6cbc0b1b
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:19.317 rmmod nvme_tcp
00:07:19.317 rmmod nvme_fabrics
00:07:19.317 rmmod nvme_keyring
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2542281 ']'
00:07:19.317 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2542281
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2542281 ']'
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2542281
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542281
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542281'
00:07:19.318 killing process with pid 2542281
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2542281
00:07:19.318 11:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2542281
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:19.578 11:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:21.490
00:07:21.490 real 0m23.978s
00:07:21.490 user 1m4.839s
00:07:21.490 sys 0m8.608s
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:21.490 ************************************
00:07:21.490 END TEST nvmf_lvol
00:07:21.490 ************************************
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.490 11:08:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:21.751 ************************************
00:07:21.751 START TEST nvmf_lvs_grow
00:07:21.751 ************************************
00:07:21.751 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:07:21.751 * Looking for test storage...
00:07:21.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.752 --rc genhtml_branch_coverage=1
00:07:21.752 --rc genhtml_function_coverage=1
00:07:21.752 --rc genhtml_legend=1
00:07:21.752 --rc geninfo_all_blocks=1
00:07:21.752 --rc geninfo_unexecuted_blocks=1
00:07:21.752
00:07:21.752 '
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.752 --rc genhtml_branch_coverage=1
00:07:21.752 --rc genhtml_function_coverage=1
00:07:21.752 --rc genhtml_legend=1
00:07:21.752 --rc geninfo_all_blocks=1
00:07:21.752 --rc geninfo_unexecuted_blocks=1
00:07:21.752
00:07:21.752 '
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:07:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.752 --rc genhtml_branch_coverage=1
00:07:21.752 --rc genhtml_function_coverage=1
00:07:21.752 --rc genhtml_legend=1
00:07:21.752 --rc geninfo_all_blocks=1
00:07:21.752 --rc geninfo_unexecuted_blocks=1
00:07:21.752
00:07:21.752 '
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:07:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:21.752 --rc genhtml_branch_coverage=1
00:07:21.752 --rc genhtml_function_coverage=1
00:07:21.752 --rc genhtml_legend=1
00:07:21.752 --rc geninfo_all_blocks=1
00:07:21.752 --rc geninfo_unexecuted_blocks=1
00:07:21.752
00:07:21.752 '
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:21.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:21.752 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:07:21.753 11:08:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:07:29.896 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:07:29.896 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:07:29.896 Found net devices under 0000:4b:00.0: cvl_0_0
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:29.896 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:07:29.896 Found net devices under 0000:4b:00.1: cvl_0_1
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:29.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:29.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms
00:07:29.897
00:07:29.897 --- 10.0.0.2 ping statistics ---
00:07:29.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.897 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:29.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:29.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms
00:07:29.897
00:07:29.897 --- 10.0.0.1 ping statistics ---
00:07:29.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.897 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:29.897 11:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2549360
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2549360
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2549360 ']'
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:29.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:29.898 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:29.898 [2024-11-20 11:08:22.082090] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:07:29.898 [2024-11-20 11:08:22.082154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:29.898 [2024-11-20 11:08:22.181758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.898 [2024-11-20 11:08:22.232548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:29.898 [2024-11-20 11:08:22.232600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:29.898 [2024-11-20 11:08:22.232609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:29.898 [2024-11-20 11:08:22.232616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:29.898 [2024-11-20 11:08:22.232623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:29.898 [2024-11-20 11:08:22.233399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:30.471 11:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:30.471 [2024-11-20 11:08:23.111633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:30.471 ************************************
00:07:30.471 START TEST lvs_grow_clean
00:07:30.471 ************************************
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:30.471 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:30.733 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:07:30.733 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:07:30.992 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=59ccd8d7-1854-4a70-968f-06d004c630bc
00:07:30.992 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc
00:07:30.992 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:07:31.253 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:07:31.253 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:07:31.253 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59ccd8d7-1854-4a70-968f-06d004c630bc lvol 150
00:07:31.513 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e60bd704-514b-40bc-a260-0b5989409253
00:07:31.513 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:31.513 11:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:07:31.513 [2024-11-20 11:08:24.161786] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:07:31.513 [2024-11-20 11:08:24.161861] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:07:31.513 true
00:07:31.513 11:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc
00:07:31.513 11:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:07:31.774 11:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
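The lvs_grow_clean case being set up here follows a fixed pattern: back a lvstore with an AIO bdev on a 200 MiB file, grow the file to 400 MiB, rescan the AIO bdev, and confirm the lvstore still reports its original 49 data clusters until bdev_lvol_grow_lvstore runs later under load (after which it reports 99). Condensed from the logged RPCs, with <lvs-uuid> standing in for 59ccd8d7-1854-4a70-968f-06d004c630bc and aio_bdev_file for the backing file under test/nvmf/target:

    truncate -s 200M aio_bdev_file
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096   # 4 KiB logical blocks, 51200 of them
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150       # 150 MiB lvol on the 49-cluster store
    truncate -s 400M aio_bdev_file
    rpc.py bdev_aio_rescan aio_bdev                      # AIO bdev: 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid>          # total_data_clusters still 49
    rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>          # later step: clusters become 99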
00:07:32.558 [2024-11-20 11:08:25.148611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549965 ] 00:07:32.558 [2024-11-20 11:08:25.239219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.558 [2024-11-20 11:08:25.291455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.500 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.500 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:33.500 11:08:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:33.760 Nvme0n1 00:07:33.760 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:34.021 [ 00:07:34.021 { 00:07:34.021 "name": "Nvme0n1", 00:07:34.021 "aliases": [ 00:07:34.021 "e60bd704-514b-40bc-a260-0b5989409253" 00:07:34.021 ], 00:07:34.021 "product_name": "NVMe disk", 00:07:34.021 "block_size": 4096, 00:07:34.021 "num_blocks": 38912, 00:07:34.021 "uuid": "e60bd704-514b-40bc-a260-0b5989409253", 00:07:34.021 "numa_id": 0, 00:07:34.021 "assigned_rate_limits": { 00:07:34.021 "rw_ios_per_sec": 0, 00:07:34.021 "rw_mbytes_per_sec": 0, 00:07:34.021 "r_mbytes_per_sec": 0, 00:07:34.021 "w_mbytes_per_sec": 0 00:07:34.021 }, 00:07:34.021 "claimed": false, 00:07:34.021 "zoned": false, 00:07:34.021 "supported_io_types": { 00:07:34.021 "read": true, 00:07:34.021 "write": true, 00:07:34.021 "unmap": true, 00:07:34.021 "flush": true, 00:07:34.021 "reset": true, 00:07:34.021 "nvme_admin": true, 00:07:34.021 "nvme_io": true, 00:07:34.021 "nvme_io_md": false, 00:07:34.021 "write_zeroes": true, 00:07:34.021 "zcopy": false, 00:07:34.021 "get_zone_info": false, 00:07:34.021 "zone_management": false, 00:07:34.021 "zone_append": false, 00:07:34.021 "compare": true, 00:07:34.021 "compare_and_write": true, 00:07:34.021 "abort": true, 00:07:34.021 "seek_hole": false, 00:07:34.021 "seek_data": false, 00:07:34.021 "copy": true, 00:07:34.021 "nvme_iov_md": false 00:07:34.021 }, 00:07:34.021 "memory_domains": [ 00:07:34.021 { 00:07:34.021 "dma_device_id": "system", 00:07:34.021 "dma_device_type": 1 00:07:34.021 } 00:07:34.021 ], 00:07:34.021 "driver_specific": { 00:07:34.021 "nvme": [ 00:07:34.021 { 00:07:34.021 "trid": { 00:07:34.021 "trtype": "TCP", 00:07:34.021 "adrfam": "IPv4", 00:07:34.021 "traddr": "10.0.0.2", 00:07:34.021 "trsvcid": "4420", 00:07:34.021 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:34.021 }, 00:07:34.021 "ctrlr_data": { 00:07:34.021 "cntlid": 1, 00:07:34.021 "vendor_id": "0x8086", 00:07:34.021 "model_number": "SPDK bdev Controller", 00:07:34.021 "serial_number": "SPDK0", 00:07:34.021 "firmware_revision": "25.01", 00:07:34.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.021 "oacs": { 00:07:34.021 "security": 0, 00:07:34.021 "format": 0, 00:07:34.021 "firmware": 0, 00:07:34.021 "ns_manage": 0 00:07:34.021 }, 00:07:34.021 "multi_ctrlr": true, 00:07:34.021 
"ana_reporting": false 00:07:34.021 }, 00:07:34.021 "vs": { 00:07:34.021 "nvme_version": "1.3" 00:07:34.021 }, 00:07:34.021 "ns_data": { 00:07:34.021 "id": 1, 00:07:34.021 "can_share": true 00:07:34.021 } 00:07:34.021 } 00:07:34.021 ], 00:07:34.021 "mp_policy": "active_passive" 00:07:34.021 } 00:07:34.021 } 00:07:34.021 ] 00:07:34.021 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2550198 00:07:34.021 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:34.021 11:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:34.021 Running I/O for 10 seconds... 00:07:34.962 Latency(us) 00:07:34.962 [2024-11-20T10:08:27.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.962 Nvme0n1 : 1.00 24756.00 96.70 0.00 0.00 0.00 0.00 0.00 00:07:34.962 [2024-11-20T10:08:27.704Z] =================================================================================================================== 00:07:34.962 [2024-11-20T10:08:27.704Z] Total : 24756.00 96.70 0.00 0.00 0.00 0.00 0.00 00:07:34.962 00:07:35.902 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:36.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.162 Nvme0n1 : 2.00 24855.50 97.09 0.00 0.00 0.00 0.00 0.00 00:07:36.162 [2024-11-20T10:08:28.904Z] =================================================================================================================== 00:07:36.162 [2024-11-20T10:08:28.904Z] Total : 24855.50 97.09 0.00 0.00 0.00 0.00 0.00 00:07:36.162 00:07:36.162 true 00:07:36.162 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:36.162 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:36.422 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:36.422 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:36.422 11:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2550198 00:07:36.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.990 Nvme0n1 : 3.00 24888.00 97.22 0.00 0.00 0.00 0.00 0.00 00:07:36.990 [2024-11-20T10:08:29.732Z] =================================================================================================================== 00:07:36.990 [2024-11-20T10:08:29.732Z] Total : 24888.00 97.22 0.00 0.00 0.00 0.00 0.00 00:07:36.990 00:07:38.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.371 Nvme0n1 : 4.00 24961.25 97.50 0.00 0.00 0.00 0.00 0.00 00:07:38.371 [2024-11-20T10:08:31.113Z] 
=================================================================================================================== 00:07:38.371 [2024-11-20T10:08:31.113Z] Total : 24961.25 97.50 0.00 0.00 0.00 0.00 0.00 00:07:38.371 00:07:39.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.311 Nvme0n1 : 5.00 25008.40 97.69 0.00 0.00 0.00 0.00 0.00 00:07:39.311 [2024-11-20T10:08:32.053Z] =================================================================================================================== 00:07:39.311 [2024-11-20T10:08:32.053Z] Total : 25008.40 97.69 0.00 0.00 0.00 0.00 0.00 00:07:39.311 00:07:40.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.251 Nvme0n1 : 6.00 25035.00 97.79 0.00 0.00 0.00 0.00 0.00 00:07:40.251 [2024-11-20T10:08:32.993Z] =================================================================================================================== 00:07:40.251 [2024-11-20T10:08:32.993Z] Total : 25035.00 97.79 0.00 0.00 0.00 0.00 0.00 00:07:40.251 00:07:41.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.190 Nvme0n1 : 7.00 25063.43 97.90 0.00 0.00 0.00 0.00 0.00 00:07:41.190 [2024-11-20T10:08:33.932Z] =================================================================================================================== 00:07:41.190 [2024-11-20T10:08:33.932Z] Total : 25063.43 97.90 0.00 0.00 0.00 0.00 0.00 00:07:41.190 00:07:42.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.131 Nvme0n1 : 8.00 25084.12 97.98 0.00 0.00 0.00 0.00 0.00 00:07:42.131 [2024-11-20T10:08:34.873Z] =================================================================================================================== 00:07:42.131 [2024-11-20T10:08:34.873Z] Total : 25084.12 97.98 0.00 0.00 0.00 0.00 0.00 00:07:42.131 00:07:43.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.072 Nvme0n1 : 9.00 25093.33 98.02 0.00 0.00 0.00 0.00 0.00 00:07:43.072 [2024-11-20T10:08:35.814Z] =================================================================================================================== 00:07:43.072 [2024-11-20T10:08:35.814Z] Total : 25093.33 98.02 0.00 0.00 0.00 0.00 0.00 00:07:43.072 00:07:44.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.011 Nvme0n1 : 10.00 25113.30 98.10 0.00 0.00 0.00 0.00 0.00 00:07:44.011 [2024-11-20T10:08:36.753Z] =================================================================================================================== 00:07:44.011 [2024-11-20T10:08:36.753Z] Total : 25113.30 98.10 0.00 0.00 0.00 0.00 0.00 00:07:44.011 00:07:44.011 00:07:44.011 Latency(us) 00:07:44.011 [2024-11-20T10:08:36.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.011 Nvme0n1 : 10.00 25109.28 98.08 0.00 0.00 5094.17 1979.73 9994.24 00:07:44.011 [2024-11-20T10:08:36.753Z] =================================================================================================================== 00:07:44.011 [2024-11-20T10:08:36.753Z] Total : 25109.28 98.08 0.00 0.00 5094.17 1979.73 9994.24 00:07:44.011 { 00:07:44.011 "results": [ 00:07:44.011 { 00:07:44.011 "job": "Nvme0n1", 00:07:44.011 "core_mask": "0x2", 00:07:44.011 "workload": "randwrite", 00:07:44.011 "status": "finished", 00:07:44.011 "queue_depth": 128, 00:07:44.011 "io_size": 4096, 00:07:44.011 
"runtime": 10.004109, 00:07:44.011 "iops": 25109.282595781395, 00:07:44.011 "mibps": 98.08313513977107, 00:07:44.011 "io_failed": 0, 00:07:44.011 "io_timeout": 0, 00:07:44.011 "avg_latency_us": 5094.172803176271, 00:07:44.011 "min_latency_us": 1979.7333333333333, 00:07:44.011 "max_latency_us": 9994.24 00:07:44.011 } 00:07:44.011 ], 00:07:44.011 "core_count": 1 00:07:44.011 } 00:07:44.011 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2549965 00:07:44.011 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2549965 ']' 00:07:44.011 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2549965 00:07:44.011 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549965 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549965' 00:07:44.272 killing process with pid 2549965 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2549965 00:07:44.272 Received shutdown signal, test time was about 10.000000 seconds 00:07:44.272 00:07:44.272 Latency(us) 00:07:44.272 [2024-11-20T10:08:37.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.272 [2024-11-20T10:08:37.014Z] =================================================================================================================== 00:07:44.272 [2024-11-20T10:08:37.014Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2549965 00:07:44.272 11:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.532 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.793 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:44.793 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:44.793 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:44.793 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:44.793 11:08:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.054 [2024-11-20 11:08:37.597689] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.054 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:45.054 request: 00:07:45.054 { 00:07:45.054 "uuid": "59ccd8d7-1854-4a70-968f-06d004c630bc", 00:07:45.054 "method": "bdev_lvol_get_lvstores", 00:07:45.054 "req_id": 1 00:07:45.054 } 00:07:45.054 Got JSON-RPC error response 00:07:45.054 response: 00:07:45.054 { 00:07:45.054 "code": -19, 00:07:45.054 "message": "No such device" 00:07:45.054 } 00:07:45.314 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:45.314 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.315 aio_bdev 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e60bd704-514b-40bc-a260-0b5989409253 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e60bd704-514b-40bc-a260-0b5989409253 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.315 11:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.576 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e60bd704-514b-40bc-a260-0b5989409253 -t 2000 00:07:45.576 [ 00:07:45.576 { 00:07:45.576 "name": "e60bd704-514b-40bc-a260-0b5989409253", 00:07:45.576 "aliases": [ 00:07:45.576 "lvs/lvol" 00:07:45.576 ], 00:07:45.576 "product_name": "Logical Volume", 00:07:45.576 "block_size": 4096, 00:07:45.576 "num_blocks": 38912, 00:07:45.576 "uuid": "e60bd704-514b-40bc-a260-0b5989409253", 00:07:45.576 "assigned_rate_limits": { 00:07:45.576 "rw_ios_per_sec": 0, 00:07:45.576 "rw_mbytes_per_sec": 0, 00:07:45.576 "r_mbytes_per_sec": 0, 00:07:45.576 "w_mbytes_per_sec": 0 00:07:45.576 }, 00:07:45.576 "claimed": false, 00:07:45.576 "zoned": false, 00:07:45.576 "supported_io_types": { 00:07:45.576 "read": true, 00:07:45.576 "write": true, 00:07:45.576 "unmap": true, 00:07:45.576 "flush": false, 00:07:45.576 "reset": true, 00:07:45.576 "nvme_admin": false, 00:07:45.576 "nvme_io": false, 00:07:45.576 "nvme_io_md": false, 00:07:45.576 "write_zeroes": true, 00:07:45.576 "zcopy": false, 00:07:45.576 "get_zone_info": false, 00:07:45.576 "zone_management": false, 00:07:45.576 "zone_append": false, 00:07:45.576 "compare": false, 00:07:45.576 "compare_and_write": false, 00:07:45.576 "abort": false, 00:07:45.576 "seek_hole": true, 00:07:45.576 "seek_data": true, 00:07:45.576 "copy": false, 00:07:45.576 "nvme_iov_md": false 00:07:45.576 }, 00:07:45.576 "driver_specific": { 00:07:45.576 "lvol": { 00:07:45.576 "lvol_store_uuid": "59ccd8d7-1854-4a70-968f-06d004c630bc", 00:07:45.576 "base_bdev": "aio_bdev", 00:07:45.576 "thin_provision": false, 00:07:45.576 "num_allocated_clusters": 38, 00:07:45.576 "snapshot": false, 00:07:45.576 "clone": false, 00:07:45.576 "esnap_clone": false 00:07:45.576 } 00:07:45.576 } 00:07:45.576 } 00:07:45.576 ] 00:07:45.576 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:45.576 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:45.576 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:45.836 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:45.836 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:45.836 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:46.096 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:46.096 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e60bd704-514b-40bc-a260-0b5989409253 00:07:46.096 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59ccd8d7-1854-4a70-968f-06d004c630bc 00:07:46.355 11:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.615 00:07:46.615 real 0m16.006s 00:07:46.615 user 0m15.757s 00:07:46.615 sys 0m1.426s 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:46.615 ************************************ 00:07:46.615 END TEST lvs_grow_clean 00:07:46.615 ************************************ 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.615 ************************************ 00:07:46.615 START TEST lvs_grow_dirty 00:07:46.615 ************************************ 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.615 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.876 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.876 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:47.137 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3a70738-3502-4e42-aa32-0b873def8000 00:07:47.137 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:07:47.137 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:47.137 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:47.137 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:47.137 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3a70738-3502-4e42-aa32-0b873def8000 lvol 150 00:07:47.397 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:07:47.397 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.397 11:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.397 [2024-11-20 11:08:40.118759] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.397 [2024-11-20 11:08:40.118806] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.397 true 00:07:47.397 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.657 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:07:47.657 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.657 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.917 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:07:47.917 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:48.179 [2024-11-20 11:08:40.796717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:48.179 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2553184 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2553184 /var/tmp/bdevperf.sock 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2553184 ']' 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:48.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.439 11:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.439 [2024-11-20 11:08:41.029496] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:07:48.439 [2024-11-20 11:08:41.029547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553184 ] 00:07:48.439 [2024-11-20 11:08:41.110815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.439 [2024-11-20 11:08:41.140484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.376 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.376 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:49.376 11:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.376 Nvme0n1 00:07:49.376 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.637 [ 00:07:49.637 { 00:07:49.637 "name": "Nvme0n1", 00:07:49.637 "aliases": [ 00:07:49.637 "b00ab326-8daf-4d55-a6ab-a9600dcdc707" 00:07:49.637 ], 00:07:49.637 "product_name": "NVMe disk", 00:07:49.637 "block_size": 4096, 00:07:49.637 "num_blocks": 38912, 00:07:49.637 "uuid": "b00ab326-8daf-4d55-a6ab-a9600dcdc707", 00:07:49.637 "numa_id": 0, 00:07:49.638 "assigned_rate_limits": { 00:07:49.638 "rw_ios_per_sec": 0, 00:07:49.638 "rw_mbytes_per_sec": 0, 00:07:49.638 "r_mbytes_per_sec": 0, 00:07:49.638 "w_mbytes_per_sec": 0 00:07:49.638 }, 00:07:49.638 "claimed": false, 00:07:49.638 "zoned": false, 00:07:49.638 "supported_io_types": { 00:07:49.638 "read": true, 00:07:49.638 "write": true, 00:07:49.638 "unmap": true, 00:07:49.638 "flush": true, 00:07:49.638 "reset": true, 00:07:49.638 "nvme_admin": true, 00:07:49.638 "nvme_io": true, 00:07:49.638 "nvme_io_md": false, 00:07:49.638 "write_zeroes": true, 00:07:49.638 "zcopy": false, 00:07:49.638 "get_zone_info": false, 00:07:49.638 "zone_management": false, 00:07:49.638 "zone_append": false, 00:07:49.638 "compare": true, 00:07:49.638 "compare_and_write": true, 00:07:49.638 "abort": true, 00:07:49.638 "seek_hole": false, 00:07:49.638 "seek_data": false, 00:07:49.638 "copy": true, 00:07:49.638 "nvme_iov_md": false 00:07:49.638 }, 00:07:49.638 "memory_domains": [ 00:07:49.638 { 00:07:49.638 "dma_device_id": "system", 00:07:49.638 "dma_device_type": 1 00:07:49.638 } 00:07:49.638 ], 00:07:49.638 "driver_specific": { 00:07:49.638 "nvme": [ 00:07:49.638 { 00:07:49.638 "trid": { 00:07:49.638 "trtype": "TCP", 00:07:49.638 "adrfam": "IPv4", 00:07:49.638 "traddr": "10.0.0.2", 00:07:49.638 "trsvcid": "4420", 00:07:49.638 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.638 }, 00:07:49.638 "ctrlr_data": { 00:07:49.638 "cntlid": 1, 00:07:49.638 "vendor_id": "0x8086", 00:07:49.638 "model_number": "SPDK bdev Controller", 00:07:49.638 "serial_number": "SPDK0", 00:07:49.638 "firmware_revision": "25.01", 00:07:49.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.638 "oacs": { 00:07:49.638 "security": 0, 00:07:49.638 "format": 0, 00:07:49.638 "firmware": 0, 00:07:49.638 "ns_manage": 0 00:07:49.638 }, 00:07:49.638 "multi_ctrlr": true, 00:07:49.638 
"ana_reporting": false 00:07:49.638 }, 00:07:49.638 "vs": { 00:07:49.638 "nvme_version": "1.3" 00:07:49.638 }, 00:07:49.638 "ns_data": { 00:07:49.638 "id": 1, 00:07:49.638 "can_share": true 00:07:49.638 } 00:07:49.638 } 00:07:49.638 ], 00:07:49.638 "mp_policy": "active_passive" 00:07:49.638 } 00:07:49.638 } 00:07:49.638 ] 00:07:49.638 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2553520 00:07:49.638 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:49.638 11:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.638 Running I/O for 10 seconds... 00:07:51.015 Latency(us) 00:07:51.015 [2024-11-20T10:08:43.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.015 Nvme0n1 : 1.00 24419.00 95.39 0.00 0.00 0.00 0.00 0.00 00:07:51.015 [2024-11-20T10:08:43.757Z] =================================================================================================================== 00:07:51.015 [2024-11-20T10:08:43.757Z] Total : 24419.00 95.39 0.00 0.00 0.00 0.00 0.00 00:07:51.015 00:07:51.585 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3a70738-3502-4e42-aa32-0b873def8000 00:07:51.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.585 Nvme0n1 : 2.00 24489.50 95.66 0.00 0.00 0.00 0.00 0.00 00:07:51.585 [2024-11-20T10:08:44.327Z] =================================================================================================================== 00:07:51.585 [2024-11-20T10:08:44.327Z] Total : 24489.50 95.66 0.00 0.00 0.00 0.00 0.00 00:07:51.585 00:07:51.844 true 00:07:51.844 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:07:51.844 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:52.104 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:52.104 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:52.104 11:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2553520 00:07:52.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.675 Nvme0n1 : 3.00 24529.00 95.82 0.00 0.00 0.00 0.00 0.00 00:07:52.675 [2024-11-20T10:08:45.417Z] =================================================================================================================== 00:07:52.675 [2024-11-20T10:08:45.417Z] Total : 24529.00 95.82 0.00 0.00 0.00 0.00 0.00 00:07:52.675 00:07:53.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.615 Nvme0n1 : 4.00 24570.75 95.98 0.00 0.00 0.00 0.00 0.00 00:07:53.615 [2024-11-20T10:08:46.357Z] 
=================================================================================================================== 00:07:53.615 [2024-11-20T10:08:46.357Z] Total : 24570.75 95.98 0.00 0.00 0.00 0.00 0.00 00:07:53.615 00:07:54.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.997 Nvme0n1 : 5.00 24599.00 96.09 0.00 0.00 0.00 0.00 0.00 00:07:54.998 [2024-11-20T10:08:47.740Z] =================================================================================================================== 00:07:54.998 [2024-11-20T10:08:47.740Z] Total : 24599.00 96.09 0.00 0.00 0.00 0.00 0.00 00:07:54.998 00:07:55.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.938 Nvme0n1 : 6.00 24628.50 96.21 0.00 0.00 0.00 0.00 0.00 00:07:55.938 [2024-11-20T10:08:48.680Z] =================================================================================================================== 00:07:55.938 [2024-11-20T10:08:48.680Z] Total : 24628.50 96.21 0.00 0.00 0.00 0.00 0.00 00:07:55.938 00:07:56.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.878 Nvme0n1 : 7.00 24642.71 96.26 0.00 0.00 0.00 0.00 0.00 00:07:56.878 [2024-11-20T10:08:49.620Z] =================================================================================================================== 00:07:56.879 [2024-11-20T10:08:49.621Z] Total : 24642.71 96.26 0.00 0.00 0.00 0.00 0.00 00:07:56.879 00:07:57.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.817 Nvme0n1 : 8.00 24652.38 96.30 0.00 0.00 0.00 0.00 0.00 00:07:57.817 [2024-11-20T10:08:50.559Z] =================================================================================================================== 00:07:57.817 [2024-11-20T10:08:50.559Z] Total : 24652.38 96.30 0.00 0.00 0.00 0.00 0.00 00:07:57.817 00:07:58.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.756 Nvme0n1 : 9.00 24662.56 96.34 0.00 0.00 0.00 0.00 0.00 00:07:58.756 [2024-11-20T10:08:51.498Z] =================================================================================================================== 00:07:58.756 [2024-11-20T10:08:51.498Z] Total : 24662.56 96.34 0.00 0.00 0.00 0.00 0.00 00:07:58.756 00:07:59.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.697 Nvme0n1 : 10.00 24673.10 96.38 0.00 0.00 0.00 0.00 0.00 00:07:59.697 [2024-11-20T10:08:52.439Z] =================================================================================================================== 00:07:59.697 [2024-11-20T10:08:52.439Z] Total : 24673.10 96.38 0.00 0.00 0.00 0.00 0.00 00:07:59.697 00:07:59.697 00:07:59.697 Latency(us) 00:07:59.697 [2024-11-20T10:08:52.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.697 Nvme0n1 : 10.00 24672.89 96.38 0.00 0.00 5184.28 1713.49 7154.35 00:07:59.697 [2024-11-20T10:08:52.439Z] =================================================================================================================== 00:07:59.697 [2024-11-20T10:08:52.439Z] Total : 24672.89 96.38 0.00 0.00 5184.28 1713.49 7154.35 00:07:59.697 { 00:07:59.697 "results": [ 00:07:59.697 { 00:07:59.697 "job": "Nvme0n1", 00:07:59.697 "core_mask": "0x2", 00:07:59.697 "workload": "randwrite", 00:07:59.697 "status": "finished", 00:07:59.697 "queue_depth": 128, 00:07:59.697 "io_size": 4096, 00:07:59.697 
"runtime": 10.004949, 00:07:59.697 "iops": 24672.889387042353, 00:07:59.697 "mibps": 96.37847416813419, 00:07:59.697 "io_failed": 0, 00:07:59.697 "io_timeout": 0, 00:07:59.697 "avg_latency_us": 5184.276505152231, 00:07:59.697 "min_latency_us": 1713.4933333333333, 00:07:59.697 "max_latency_us": 7154.346666666666 00:07:59.697 } 00:07:59.697 ], 00:07:59.697 "core_count": 1 00:07:59.697 } 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2553184 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2553184 ']' 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2553184 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2553184 00:07:59.697 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:59.957 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:59.957 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2553184' 00:07:59.957 killing process with pid 2553184 00:07:59.957 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2553184 00:07:59.957 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.957 00:07:59.957 Latency(us) 00:07:59.957 [2024-11-20T10:08:52.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.957 [2024-11-20T10:08:52.699Z] =================================================================================================================== 00:07:59.957 [2024-11-20T10:08:52.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.957 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2553184 00:07:59.957 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.217 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:00.217 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:00.217 11:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:00.477 11:08:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2549360 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2549360 00:08:00.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2549360 Killed "${NVMF_APP[@]}" "$@" 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2555553 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2555553 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2555553 ']' 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.477 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:00.477 [2024-11-20 11:08:53.131471] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:08:00.477 [2024-11-20 11:08:53.131529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.737 [2024-11-20 11:08:53.225776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.737 [2024-11-20 11:08:53.255549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.737 [2024-11-20 11:08:53.255575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.737 [2024-11-20 11:08:53.255580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.737 [2024-11-20 11:08:53.255585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:00.737 [2024-11-20 11:08:53.255589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.737 [2024-11-20 11:08:53.256027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.307 11:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.567 [2024-11-20 11:08:54.113497] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:01.567 [2024-11-20 11:08:54.113571] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:01.567 [2024-11-20 11:08:54.113593] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.567 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b00ab326-8daf-4d55-a6ab-a9600dcdc707 -t 2000 00:08:01.828 [ 00:08:01.828 { 00:08:01.828 "name": "b00ab326-8daf-4d55-a6ab-a9600dcdc707", 00:08:01.828 "aliases": [ 00:08:01.828 "lvs/lvol" 00:08:01.828 ], 00:08:01.828 "product_name": "Logical Volume", 00:08:01.828 "block_size": 4096, 00:08:01.828 "num_blocks": 38912, 00:08:01.828 "uuid": "b00ab326-8daf-4d55-a6ab-a9600dcdc707", 00:08:01.828 "assigned_rate_limits": { 00:08:01.828 "rw_ios_per_sec": 0, 00:08:01.828 "rw_mbytes_per_sec": 0, 
00:08:01.828 "r_mbytes_per_sec": 0, 00:08:01.828 "w_mbytes_per_sec": 0 00:08:01.828 }, 00:08:01.828 "claimed": false, 00:08:01.828 "zoned": false, 00:08:01.828 "supported_io_types": { 00:08:01.828 "read": true, 00:08:01.828 "write": true, 00:08:01.828 "unmap": true, 00:08:01.828 "flush": false, 00:08:01.828 "reset": true, 00:08:01.828 "nvme_admin": false, 00:08:01.828 "nvme_io": false, 00:08:01.828 "nvme_io_md": false, 00:08:01.828 "write_zeroes": true, 00:08:01.828 "zcopy": false, 00:08:01.828 "get_zone_info": false, 00:08:01.828 "zone_management": false, 00:08:01.828 "zone_append": false, 00:08:01.828 "compare": false, 00:08:01.828 "compare_and_write": false, 00:08:01.828 "abort": false, 00:08:01.828 "seek_hole": true, 00:08:01.828 "seek_data": true, 00:08:01.828 "copy": false, 00:08:01.828 "nvme_iov_md": false 00:08:01.828 }, 00:08:01.828 "driver_specific": { 00:08:01.828 "lvol": { 00:08:01.828 "lvol_store_uuid": "c3a70738-3502-4e42-aa32-0b873def8000", 00:08:01.828 "base_bdev": "aio_bdev", 00:08:01.828 "thin_provision": false, 00:08:01.828 "num_allocated_clusters": 38, 00:08:01.828 "snapshot": false, 00:08:01.828 "clone": false, 00:08:01.828 "esnap_clone": false 00:08:01.828 } 00:08:01.828 } 00:08:01.828 } 00:08:01.828 ] 00:08:01.828 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:01.828 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:01.828 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:02.090 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:02.090 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:02.090 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:02.090 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:02.090 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.351 [2024-11-20 11:08:54.946118] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:02.351 11:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:02.615 request: 00:08:02.615 { 00:08:02.615 "uuid": "c3a70738-3502-4e42-aa32-0b873def8000", 00:08:02.615 "method": "bdev_lvol_get_lvstores", 00:08:02.615 "req_id": 1 00:08:02.615 } 00:08:02.615 Got JSON-RPC error response 00:08:02.615 response: 00:08:02.615 { 00:08:02.615 "code": -19, 00:08:02.615 "message": "No such device" 00:08:02.615 } 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.615 aio_bdev 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.615 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.615 11:08:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.878 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b00ab326-8daf-4d55-a6ab-a9600dcdc707 -t 2000 00:08:03.137 [ 00:08:03.137 { 00:08:03.137 "name": "b00ab326-8daf-4d55-a6ab-a9600dcdc707", 00:08:03.137 "aliases": [ 00:08:03.137 "lvs/lvol" 00:08:03.137 ], 00:08:03.138 "product_name": "Logical Volume", 00:08:03.138 "block_size": 4096, 00:08:03.138 "num_blocks": 38912, 00:08:03.138 "uuid": "b00ab326-8daf-4d55-a6ab-a9600dcdc707", 00:08:03.138 "assigned_rate_limits": { 00:08:03.138 "rw_ios_per_sec": 0, 00:08:03.138 "rw_mbytes_per_sec": 0, 00:08:03.138 "r_mbytes_per_sec": 0, 00:08:03.138 "w_mbytes_per_sec": 0 00:08:03.138 }, 00:08:03.138 "claimed": false, 00:08:03.138 "zoned": false, 00:08:03.138 "supported_io_types": { 00:08:03.138 "read": true, 00:08:03.138 "write": true, 00:08:03.138 "unmap": true, 00:08:03.138 "flush": false, 00:08:03.138 "reset": true, 00:08:03.138 "nvme_admin": false, 00:08:03.138 "nvme_io": false, 00:08:03.138 "nvme_io_md": false, 00:08:03.138 "write_zeroes": true, 00:08:03.138 "zcopy": false, 00:08:03.138 "get_zone_info": false, 00:08:03.138 "zone_management": false, 00:08:03.138 "zone_append": false, 00:08:03.138 "compare": false, 00:08:03.138 "compare_and_write": false, 00:08:03.138 "abort": false, 00:08:03.138 "seek_hole": true, 00:08:03.138 "seek_data": true, 00:08:03.138 "copy": false, 00:08:03.138 "nvme_iov_md": false 00:08:03.138 }, 00:08:03.138 "driver_specific": { 00:08:03.138 "lvol": { 00:08:03.138 "lvol_store_uuid": "c3a70738-3502-4e42-aa32-0b873def8000", 00:08:03.138 "base_bdev": "aio_bdev", 00:08:03.138 "thin_provision": false, 00:08:03.138 "num_allocated_clusters": 38, 00:08:03.138 "snapshot": false, 00:08:03.138 "clone": false, 00:08:03.138 "esnap_clone": false 00:08:03.138 } 00:08:03.138 } 00:08:03.138 } 00:08:03.138 ] 00:08:03.138 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:03.138 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:03.138 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:03.138 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:03.138 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:03.138 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:03.398 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:03.398 11:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b00ab326-8daf-4d55-a6ab-a9600dcdc707 00:08:03.677 11:08:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3a70738-3502-4e42-aa32-0b873def8000 00:08:03.677 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.991 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.992 00:08:03.992 real 0m17.250s 00:08:03.992 user 0m45.345s 00:08:03.992 sys 0m3.269s 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.992 ************************************ 00:08:03.992 END TEST lvs_grow_dirty 00:08:03.992 ************************************ 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:03.992 nvmf_trace.0 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.992 rmmod nvme_tcp 00:08:03.992 rmmod nvme_fabrics 00:08:03.992 rmmod nvme_keyring 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:03.992 
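process_shm above rescues the SPDK trace ring that the target left in /dev/shm, so the run can be replayed offline with spdk_trace. Condensed from the trace, assuming $out (illustrative) points at a writable artifact directory:

  # Archive every per-app trace file (<name>.<shm_id>) left in /dev/shm
  for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
      tar -C /dev/shm -czf "$out/${f}_shm.tar.gz" "$f"
  done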
11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2555553 ']' 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2555553 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2555553 ']' 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2555553 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.992 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2555553 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2555553' 00:08:04.253 killing process with pid 2555553 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2555553 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2555553 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.253 11:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.800 00:08:06.800 real 0m44.691s 00:08:06.800 user 1m7.391s 00:08:06.800 sys 0m10.901s 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.800 ************************************ 00:08:06.800 END TEST nvmf_lvs_grow 00:08:06.800 ************************************ 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
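killprocess above only signals the pid after confirming its command name still looks like an SPDK reactor, then reaps it so the exit status is collected. Roughly, with $pid illustrative (wait works here because the target was launched by this same shell):

  name=$(ps --no-headers -o comm= "$pid")
  [[ $name == reactor_* ]] && kill "$pid" && wait "$pid"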
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.800 11:08:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.800 ************************************ 00:08:06.800 START TEST nvmf_bdev_io_wait 00:08:06.800 ************************************ 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.800 * Looking for test storage... 00:08:06.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.800 --rc genhtml_branch_coverage=1 00:08:06.800 --rc genhtml_function_coverage=1 00:08:06.800 --rc genhtml_legend=1 00:08:06.800 --rc geninfo_all_blocks=1 00:08:06.800 --rc geninfo_unexecuted_blocks=1 00:08:06.800 00:08:06.800 ' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.800 --rc genhtml_branch_coverage=1 00:08:06.800 --rc genhtml_function_coverage=1 00:08:06.800 --rc genhtml_legend=1 00:08:06.800 --rc geninfo_all_blocks=1 00:08:06.800 --rc geninfo_unexecuted_blocks=1 00:08:06.800 00:08:06.800 ' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.800 --rc genhtml_branch_coverage=1 00:08:06.800 --rc genhtml_function_coverage=1 00:08:06.800 --rc genhtml_legend=1 00:08:06.800 --rc geninfo_all_blocks=1 00:08:06.800 --rc geninfo_unexecuted_blocks=1 00:08:06.800 00:08:06.800 ' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.800 --rc genhtml_branch_coverage=1 00:08:06.800 --rc genhtml_function_coverage=1 00:08:06.800 --rc genhtml_legend=1 00:08:06.800 --rc geninfo_all_blocks=1 00:08:06.800 --rc geninfo_unexecuted_blocks=1 00:08:06.800 00:08:06.800 ' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.800 11:08:59 
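The cmp_versions machinery above splits dotted versions into numeric fields so that lcov 1.15 orders below 2. Where GNU coreutils is available, sort -V gives an equivalent check in one line (a simplification, not what scripts/common.sh itself does):

  lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov predates 2.x"   # true: 1.15 < 2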
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.800 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
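The "[: : integer expression expected" complaint captured above is script noise rather than a test failure: an empty variable reaches a numeric test at nvmf/common.sh line 33 and the comparison simply evaluates false. A defensive rewrite of that shape, with SPDK_SOME_FLAG a purely illustrative name:

  # Default the flag to 0 so [ ... -eq 1 ] always sees an integer
  if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
      :   # flag-enabled branch
  fi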
MALLOC_BLOCK_SIZE=512 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.801 11:08:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.946 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.947 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.947 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.947 11:09:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.947 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.947 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
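gather_supported_nvmf_pci_devs resolves each matching PCI function to its kernel net device purely through sysfs, which is where the "Found net devices under ..." lines come from. For one function (the bus address is taken from this run):

  pci=0000:4b:00.0
  for path in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
  done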
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:08:14.947 00:08:14.947 --- 10.0.0.2 ping statistics --- 00:08:14.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.947 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
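nvmf_tcp_init above moves one port of the NIC pair into a fresh network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exercise real wire while sharing one host. Condensed from the trace, with the interface names as discovered on this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # reachability gate, as logged above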
00:08:14.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:08:14.947 00:08:14.947 --- 10.0.0.1 ping statistics --- 00:08:14.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.947 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.947 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2560742 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2560742 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2560742 ']' 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.948 11:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.948 [2024-11-20 11:09:06.790166] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
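nvmfappstart above backgrounds nvmf_tgt inside the namespace and waitforlisten blocks until the RPC socket answers; only then do the rpc_cmd calls that follow make sense. An equivalent readiness loop as a sketch (rpc_get_methods is a standard SPDK RPC; the 10 s budget is illustrative):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  pid=$!
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done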
00:08:14.948 [2024-11-20 11:09:06.790231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.948 [2024-11-20 11:09:06.895135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.948 [2024-11-20 11:09:06.949712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.948 [2024-11-20 11:09:06.949767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.948 [2024-11-20 11:09:06.949776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.948 [2024-11-20 11:09:06.949783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.948 [2024-11-20 11:09:06.949789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.948 [2024-11-20 11:09:06.951873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.948 [2024-11-20 11:09:06.951908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.948 [2024-11-20 11:09:06.952049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.948 [2024-11-20 11:09:06.952049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.948 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:15.210 [2024-11-20 11:09:07.743285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 Malloc0 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.210 [2024-11-20 11:09:07.808845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2561007 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2561011 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.210 { 00:08:15.210 "params": { 
00:08:15.210 "name": "Nvme$subsystem", 00:08:15.210 "trtype": "$TEST_TRANSPORT", 00:08:15.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.210 "adrfam": "ipv4", 00:08:15.210 "trsvcid": "$NVMF_PORT", 00:08:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.210 "hdgst": ${hdgst:-false}, 00:08:15.210 "ddgst": ${ddgst:-false} 00:08:15.210 }, 00:08:15.210 "method": "bdev_nvme_attach_controller" 00:08:15.210 } 00:08:15.210 EOF 00:08:15.210 )") 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2561014 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.210 { 00:08:15.210 "params": { 00:08:15.210 "name": "Nvme$subsystem", 00:08:15.210 "trtype": "$TEST_TRANSPORT", 00:08:15.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.210 "adrfam": "ipv4", 00:08:15.210 "trsvcid": "$NVMF_PORT", 00:08:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.210 "hdgst": ${hdgst:-false}, 00:08:15.210 "ddgst": ${ddgst:-false} 00:08:15.210 }, 00:08:15.210 "method": "bdev_nvme_attach_controller" 00:08:15.210 } 00:08:15.210 EOF 00:08:15.210 )") 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2561017 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.210 { 00:08:15.210 "params": { 00:08:15.210 "name": "Nvme$subsystem", 00:08:15.210 "trtype": "$TEST_TRANSPORT", 00:08:15.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.210 "adrfam": "ipv4", 00:08:15.210 "trsvcid": "$NVMF_PORT", 00:08:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.210 "hdgst": ${hdgst:-false}, 
00:08:15.210 "ddgst": ${ddgst:-false} 00:08:15.210 }, 00:08:15.210 "method": "bdev_nvme_attach_controller" 00:08:15.210 } 00:08:15.210 EOF 00:08:15.210 )") 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:15.210 { 00:08:15.210 "params": { 00:08:15.210 "name": "Nvme$subsystem", 00:08:15.210 "trtype": "$TEST_TRANSPORT", 00:08:15.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:15.210 "adrfam": "ipv4", 00:08:15.210 "trsvcid": "$NVMF_PORT", 00:08:15.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:15.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:15.210 "hdgst": ${hdgst:-false}, 00:08:15.210 "ddgst": ${ddgst:-false} 00:08:15.210 }, 00:08:15.210 "method": "bdev_nvme_attach_controller" 00:08:15.210 } 00:08:15.210 EOF 00:08:15.210 )") 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2561007 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.210 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.210 "params": { 00:08:15.210 "name": "Nvme1", 00:08:15.211 "trtype": "tcp", 00:08:15.211 "traddr": "10.0.0.2", 00:08:15.211 "adrfam": "ipv4", 00:08:15.211 "trsvcid": "4420", 00:08:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.211 "hdgst": false, 00:08:15.211 "ddgst": false 00:08:15.211 }, 00:08:15.211 "method": "bdev_nvme_attach_controller" 00:08:15.211 }' 00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.211 "params": { 00:08:15.211 "name": "Nvme1", 00:08:15.211 "trtype": "tcp", 00:08:15.211 "traddr": "10.0.0.2", 00:08:15.211 "adrfam": "ipv4", 00:08:15.211 "trsvcid": "4420", 00:08:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.211 "hdgst": false, 00:08:15.211 "ddgst": false 00:08:15.211 }, 00:08:15.211 "method": "bdev_nvme_attach_controller" 00:08:15.211 }' 00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.211 "params": { 00:08:15.211 "name": "Nvme1", 00:08:15.211 "trtype": "tcp", 00:08:15.211 "traddr": "10.0.0.2", 00:08:15.211 "adrfam": "ipv4", 00:08:15.211 "trsvcid": "4420", 00:08:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.211 "hdgst": false, 00:08:15.211 "ddgst": false 00:08:15.211 }, 00:08:15.211 "method": "bdev_nvme_attach_controller" 00:08:15.211 }' 00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:15.211 11:09:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:15.211 "params": { 00:08:15.211 "name": "Nvme1", 00:08:15.211 "trtype": "tcp", 00:08:15.211 "traddr": "10.0.0.2", 00:08:15.211 "adrfam": "ipv4", 00:08:15.211 "trsvcid": "4420", 00:08:15.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:15.211 "hdgst": false, 00:08:15.211 "ddgst": false 00:08:15.211 }, 00:08:15.211 "method": "bdev_nvme_attach_controller" 00:08:15.211 }' 00:08:15.211 [2024-11-20 11:09:07.867497] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:08:15.211 [2024-11-20 11:09:07.867568] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:15.211 [2024-11-20 11:09:07.871084] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:08:15.211 [2024-11-20 11:09:07.871147] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:15.211 [2024-11-20 11:09:07.879899] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:08:15.211 [2024-11-20 11:09:07.879968] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:15.211 [2024-11-20 11:09:07.882415] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
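Each of the four bdevperf instances above takes its attach configuration through process substitution (--json /dev/fd/63), so the generated JSON never touches disk. The write instance, reconstructed from the trace (gen_nvmf_target_json emits the printf'd config shown above):

  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)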
00:08:15.211 [2024-11-20 11:09:07.882501] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:15.478 [2024-11-20 11:09:08.083908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.478 [2024-11-20 11:09:08.126628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:15.478 [2024-11-20 11:09:08.150115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.478 [2024-11-20 11:09:08.188715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:15.739 [2024-11-20 11:09:08.245376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.739 [2024-11-20 11:09:08.282518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:15.739 [2024-11-20 11:09:08.308349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.739 [2024-11-20 11:09:08.347906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:15.739 Running I/O for 1 seconds... 00:08:15.739 Running I/O for 1 seconds... 00:08:15.999 Running I/O for 1 seconds... 00:08:15.999 Running I/O for 1 seconds... 00:08:16.944 7359.00 IOPS, 28.75 MiB/s 00:08:16.944 Latency(us) 00:08:16.944 [2024-11-20T10:09:09.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.944 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:16.944 Nvme1n1 : 1.02 7343.62 28.69 0.00 0.00 17227.10 6089.39 25449.81 00:08:16.944 [2024-11-20T10:09:09.686Z] =================================================================================================================== 00:08:16.944 [2024-11-20T10:09:09.686Z] Total : 7343.62 28.69 0.00 0.00 17227.10 6089.39 25449.81 00:08:16.944 181976.00 IOPS, 710.84 MiB/s 00:08:16.944 Latency(us) 00:08:16.944 [2024-11-20T10:09:09.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.944 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:16.944 Nvme1n1 : 1.00 181618.40 709.45 0.00 0.00 700.73 302.08 1966.08 00:08:16.944 [2024-11-20T10:09:09.686Z] =================================================================================================================== 00:08:16.944 [2024-11-20T10:09:09.686Z] Total : 181618.40 709.45 0.00 0.00 700.73 302.08 1966.08 00:08:16.944 7175.00 IOPS, 28.03 MiB/s 00:08:16.944 Latency(us) 00:08:16.944 [2024-11-20T10:09:09.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.945 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:16.945 Nvme1n1 : 1.01 7290.23 28.48 0.00 0.00 17504.60 4614.83 32986.45 00:08:16.945 [2024-11-20T10:09:09.687Z] =================================================================================================================== 00:08:16.945 [2024-11-20T10:09:09.687Z] Total : 7290.23 28.48 0.00 0.00 17504.60 4614.83 32986.45 00:08:16.945 10878.00 IOPS, 42.49 MiB/s 00:08:16.945 Latency(us) 00:08:16.945 [2024-11-20T10:09:09.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.945 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:16.945 Nvme1n1 : 1.01 10930.34 42.70 0.00 0.00 11665.63 5406.72 21517.65 00:08:16.945 [2024-11-20T10:09:09.687Z] 
=================================================================================================================== 00:08:16.945 [2024-11-20T10:09:09.687Z] Total : 10930.34 42.70 0.00 0.00 11665.63 5406.72 21517.65 00:08:16.945 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2561011 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2561014 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2561017 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.214 rmmod nvme_tcp 00:08:17.214 rmmod nvme_fabrics 00:08:17.214 rmmod nvme_keyring 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2560742 ']' 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2560742 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2560742 ']' 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2560742 00:08:17.214 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560742 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2560742' 00:08:17.215 killing process with pid 2560742 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2560742 00:08:17.215 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2560742 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.478 11:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:19.389 00:08:19.389 real 0m13.032s 00:08:19.389 user 0m19.524s 00:08:19.389 sys 0m7.348s 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 ************************************ 00:08:19.389 END TEST nvmf_bdev_io_wait 00:08:19.389 ************************************ 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.389 11:09:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.650 ************************************ 00:08:19.650 START TEST nvmf_queue_depth 00:08:19.650 ************************************ 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.650 * Looking for test storage... 
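With the bdev_io_wait test done, nvmftestfini (traced just above) unwinds host state before the next test starts. A hedged reconstruction of what those steps amount to, using this rig's interface names, which are environment-specific:

# Unload the host-side NVMe-oF modules (the rmmod lines above), drop the
# SPDK-tagged firewall rules, and remove the target's network namespace.
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk    # what _remove_spdk_ns boils down to (assumed)
ip -4 addr flush cvl_0_1           # clear the initiator-side test address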
00:08:19.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.650 --rc genhtml_branch_coverage=1 00:08:19.650 --rc genhtml_function_coverage=1 00:08:19.650 --rc genhtml_legend=1 00:08:19.650 --rc geninfo_all_blocks=1 00:08:19.650 --rc geninfo_unexecuted_blocks=1 00:08:19.650 00:08:19.650 ' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.650 --rc genhtml_branch_coverage=1 00:08:19.650 --rc genhtml_function_coverage=1 00:08:19.650 --rc genhtml_legend=1 00:08:19.650 --rc geninfo_all_blocks=1 00:08:19.650 --rc geninfo_unexecuted_blocks=1 00:08:19.650 00:08:19.650 ' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.650 --rc genhtml_branch_coverage=1 00:08:19.650 --rc genhtml_function_coverage=1 00:08:19.650 --rc genhtml_legend=1 00:08:19.650 --rc geninfo_all_blocks=1 00:08:19.650 --rc geninfo_unexecuted_blocks=1 00:08:19.650 00:08:19.650 ' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.650 --rc genhtml_branch_coverage=1 00:08:19.650 --rc genhtml_function_coverage=1 00:08:19.650 --rc genhtml_legend=1 00:08:19.650 --rc geninfo_all_blocks=1 00:08:19.650 --rc geninfo_unexecuted_blocks=1 00:08:19.650 00:08:19.650 ' 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.650 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.651 11:09:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:27.801 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:27.801 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:27.801 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:27.801 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.801 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:08:27.802 00:08:27.802 --- 10.0.0.2 ping statistics --- 00:08:27.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.802 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
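Before the queue-depth target comes up, nvmf_tcp_init (the common.sh@250-291 block above) builds the point-to-point test network: one E810 port (cvl_0_0) moves into a fresh namespace and gets the target address, the other (cvl_0_1) stays in the root namespace as the initiator. Condensed into a hedged sketch, interface names again rig-specific:

ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF                  # tag the teardown greps for (abbreviated)
ping -c 1 10.0.0.2                                # verify each direction, as above/below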
00:08:27.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:08:27.802 00:08:27.802 --- 10.0.0.1 ping statistics --- 00:08:27.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.802 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2566075 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2566075 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2566075 ']' 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.802 11:09:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.802 [2024-11-20 11:09:19.910883] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
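nvmfappstart then launches the target inside that namespace, so 10.0.0.2 is local to it, pinned to one core by the 0x2 mask and with the full tracepoint mask the NOTICE lines below mention. A hedged sketch of the launch-and-wait step (waitforlisten polls the RPC socket; the until-loop here is illustrative, not the harness's exact code):

ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &    # -i: shm id, -e: tracepoint mask
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5                                       # wait for /var/tmp/spdk.sock
done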
00:08:27.802 [2024-11-20 11:09:19.910949] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.802 [2024-11-20 11:09:20.016129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.802 [2024-11-20 11:09:20.067586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.802 [2024-11-20 11:09:20.067640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.802 [2024-11-20 11:09:20.067649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.802 [2024-11-20 11:09:20.067657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.802 [2024-11-20 11:09:20.067664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.802 [2024-11-20 11:09:20.068445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.063 [2024-11-20 11:09:20.774134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.063 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.325 Malloc0 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.325 11:09:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.325 [2024-11-20 11:09:20.835370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2566288 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2566288 /var/tmp/bdevperf.sock 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2566288 ']' 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.325 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.326 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.326 11:09:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.326 [2024-11-20 11:09:20.900863] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
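Provisioning the queue-depth target is five RPCs, all visible in the rpc_cmd traces above; collected into one hedged sketch (values exactly as queue_depth.sh passes them, comments are interpretation):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # -o/-u per the harness's NVMF_TRANSPORT_OPTS
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001                                  # -a: allow any host, -s: serial
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420                                # yields the "Listening on 10.0.0.2 port 4420" NOTICE

bdevperf is then started idle (-z) against its own socket with -q 1024 -o 4096 -w verify -t 10; the measurement itself is kicked off later by bdevperf.py perform_tests.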
00:08:28.326 [2024-11-20 11:09:20.900957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566288 ] 00:08:28.326 [2024-11-20 11:09:20.993598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.326 [2024-11-20 11:09:21.047334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.269 NVMe0n1 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.269 11:09:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.269 Running I/O for 10 seconds... 00:08:31.597 9216.00 IOPS, 36.00 MiB/s [2024-11-20T10:09:25.280Z] 10418.00 IOPS, 40.70 MiB/s [2024-11-20T10:09:26.223Z] 10917.33 IOPS, 42.65 MiB/s [2024-11-20T10:09:27.166Z] 11191.00 IOPS, 43.71 MiB/s [2024-11-20T10:09:28.109Z] 11654.20 IOPS, 45.52 MiB/s [2024-11-20T10:09:29.051Z] 11945.00 IOPS, 46.66 MiB/s [2024-11-20T10:09:29.992Z] 12259.00 IOPS, 47.89 MiB/s [2024-11-20T10:09:31.377Z] 12409.75 IOPS, 48.48 MiB/s [2024-11-20T10:09:32.319Z] 12551.22 IOPS, 49.03 MiB/s [2024-11-20T10:09:32.319Z] 12691.40 IOPS, 49.58 MiB/s 00:08:39.577 Latency(us) 00:08:39.577 [2024-11-20T10:09:32.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.577 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:39.577 Verification LBA range: start 0x0 length 0x4000 00:08:39.577 NVMe0n1 : 10.06 12723.17 49.70 0.00 0.00 80227.16 20534.61 72526.51 00:08:39.577 [2024-11-20T10:09:32.319Z] =================================================================================================================== 00:08:39.577 [2024-11-20T10:09:32.319Z] Total : 12723.17 49.70 0.00 0.00 80227.16 20534.61 72526.51 00:08:39.577 { 00:08:39.577 "results": [ 00:08:39.577 { 00:08:39.577 "job": "NVMe0n1", 00:08:39.577 "core_mask": "0x1", 00:08:39.577 "workload": "verify", 00:08:39.577 "status": "finished", 00:08:39.577 "verify_range": { 00:08:39.577 "start": 0, 00:08:39.577 "length": 16384 00:08:39.577 }, 00:08:39.577 "queue_depth": 1024, 00:08:39.577 "io_size": 4096, 00:08:39.577 "runtime": 10.055352, 00:08:39.577 "iops": 12723.174683491936, 00:08:39.577 "mibps": 49.699901107390374, 00:08:39.577 "io_failed": 0, 00:08:39.577 "io_timeout": 0, 00:08:39.577 "avg_latency_us": 80227.15613806904, 00:08:39.577 "min_latency_us": 20534.613333333335, 00:08:39.577 "max_latency_us": 72526.50666666667 00:08:39.577 } 00:08:39.577 ], 00:08:39.577 "core_count": 1 00:08:39.577 } 00:08:39.577 11:09:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2566288 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2566288 ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2566288 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566288 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566288' 00:08:39.577 killing process with pid 2566288 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2566288 00:08:39.577 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.577 00:08:39.577 Latency(us) 00:08:39.577 [2024-11-20T10:09:32.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.577 [2024-11-20T10:09:32.319Z] =================================================================================================================== 00:08:39.577 [2024-11-20T10:09:32.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2566288 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.577 rmmod nvme_tcp 00:08:39.577 rmmod nvme_fabrics 00:08:39.577 rmmod nvme_keyring 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2566075 ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2566075 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2566075 ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2566075 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.577 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566075 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566075' 00:08:39.838 killing process with pid 2566075 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2566075 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2566075 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.838 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.839 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.839 11:09:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.386 00:08:42.386 real 0m22.397s 00:08:42.386 user 0m25.731s 00:08:42.386 sys 0m6.948s 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:42.386 ************************************ 00:08:42.386 END TEST nvmf_queue_depth 00:08:42.386 ************************************ 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.386 ************************************ 00:08:42.386 START TEST nvmf_target_multipath 00:08:42.386 ************************************ 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:42.386 * Looking for test storage... 00:08:42.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.386 --rc genhtml_branch_coverage=1 00:08:42.386 --rc genhtml_function_coverage=1 00:08:42.386 --rc genhtml_legend=1 00:08:42.386 --rc geninfo_all_blocks=1 00:08:42.386 --rc geninfo_unexecuted_blocks=1 00:08:42.386 00:08:42.386 ' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.386 --rc genhtml_branch_coverage=1 00:08:42.386 --rc genhtml_function_coverage=1 00:08:42.386 --rc genhtml_legend=1 00:08:42.386 --rc geninfo_all_blocks=1 00:08:42.386 --rc geninfo_unexecuted_blocks=1 00:08:42.386 00:08:42.386 ' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.386 --rc genhtml_branch_coverage=1 00:08:42.386 --rc genhtml_function_coverage=1 00:08:42.386 --rc genhtml_legend=1 00:08:42.386 --rc geninfo_all_blocks=1 00:08:42.386 --rc geninfo_unexecuted_blocks=1 00:08:42.386 00:08:42.386 ' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.386 --rc genhtml_branch_coverage=1 00:08:42.386 --rc genhtml_function_coverage=1 00:08:42.386 --rc genhtml_legend=1 00:08:42.386 --rc geninfo_all_blocks=1 00:08:42.386 --rc geninfo_unexecuted_blocks=1 00:08:42.386 00:08:42.386 ' 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.386 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.387 11:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.534 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:50.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:50.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:50.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.535 11:09:42 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:50.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:08:50.535 00:08:50.535 --- 10.0.0.2 ping statistics --- 00:08:50.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.535 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:08:50.535 00:08:50.535 --- 10.0.0.1 ping statistics --- 00:08:50.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.535 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:50.535 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:50.535 only one NIC for nvmf test 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
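The nvmf_tcp_init trace above is the heart of the phy-mode setup: of the two E810 ports detected (cvl_0_0 and cvl_0_1), one is moved into a private network namespace to act as the NVMe-oF target at 10.0.0.2 while its cabled peer stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP listener port, and two pings confirm reachability in both directions before any NVMe traffic is attempted. The multipath test then exits early ("only one NIC for nvmf test") and nvmftestfini tears this back down. A minimal standalone sketch of that topology, assuming the same interface names as in this run:

```bash
#!/usr/bin/env bash
# Minimal sketch of the topology nvmf_tcp_init builds above.
# Assumes two cabled E810 ports named cvl_0_0 / cvl_0_1 (as in this log);
# adjust the names for other hardware. Run as root.
set -euo pipefail

TARGET_IF=cvl_0_0        # becomes the NVMe-oF target side
INITIATOR_IF=cvl_0_1     # stays in the root namespace as the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Isolate the target port in its own namespace.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Initiator side: 10.0.0.1, target side: 10.0.0.2 (matches the log).
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port on the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, as in the trace: each side must reach the other.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```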
00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.536 rmmod nvme_tcp 00:08:50.536 rmmod nvme_fabrics 00:08:50.536 rmmod nvme_keyring 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.536 11:09:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:51.923 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.924 00:08:51.924 real 0m9.976s 00:08:51.924 user 0m2.222s 00:08:51.924 sys 0m5.707s 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.924 ************************************ 00:08:51.924 END TEST nvmf_target_multipath 00:08:51.924 ************************************ 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.924 11:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.186 ************************************ 00:08:52.186 START TEST nvmf_zcopy 00:08:52.186 ************************************ 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:52.186 * Looking for test storage... 
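Each test file opens with the same preamble, visible above for multipath and repeated below for zcopy: locate the test storage, then feed the installed lcov version through cmp_versions (the `lt 1.15 2` call) to pick coverage flags. The comparison splits each version string on dots and dashes and compares the fields numerically, left to right. A rough reconstruction from the xtrace, not the verbatim scripts/common.sh source:

```bash
# Sketch of the field-wise version comparison traced above and below
# (reconstructed from the scripts/common.sh xtrace, so details may differ).
cmp_versions() {
    local op=$2 v f1 f2
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        # Missing or non-numeric fields compare as 0.
        f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        [[ $f1 =~ ^[0-9]+$ ]] || f1=0
        [[ $f2 =~ ^[0-9]+$ ]] || f2=0
        ((f1 > f2)) && { [[ $op == '>' ]]; return; }
        ((f1 < f2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]  # all fields equal: only >=, <=, == style ops succeed
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov < 2: use pre-2.0 coverage flags"
```

Here 1.15 < 2 holds on the first field, which is why the trace ends with lcov_rc_opt set to the pre-2.0 option set (`--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1`).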
00:08:52.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.186 --rc genhtml_branch_coverage=1 00:08:52.186 --rc genhtml_function_coverage=1 00:08:52.186 --rc genhtml_legend=1 00:08:52.186 --rc geninfo_all_blocks=1 00:08:52.186 --rc geninfo_unexecuted_blocks=1 00:08:52.186 00:08:52.186 ' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.186 --rc genhtml_branch_coverage=1 00:08:52.186 --rc genhtml_function_coverage=1 00:08:52.186 --rc genhtml_legend=1 00:08:52.186 --rc geninfo_all_blocks=1 00:08:52.186 --rc geninfo_unexecuted_blocks=1 00:08:52.186 00:08:52.186 ' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.186 --rc genhtml_branch_coverage=1 00:08:52.186 --rc genhtml_function_coverage=1 00:08:52.186 --rc genhtml_legend=1 00:08:52.186 --rc geninfo_all_blocks=1 00:08:52.186 --rc geninfo_unexecuted_blocks=1 00:08:52.186 00:08:52.186 ' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.186 --rc genhtml_branch_coverage=1 00:08:52.186 --rc genhtml_function_coverage=1 00:08:52.186 --rc genhtml_legend=1 00:08:52.186 --rc geninfo_all_blocks=1 00:08:52.186 --rc geninfo_unexecuted_blocks=1 00:08:52.186 00:08:52.186 ' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.186 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:52.187 11:09:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:00.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:00.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:00.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:00.328 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.328 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:09:00.329 00:09:00.329 --- 10.0.0.2 ping statistics --- 00:09:00.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.329 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:09:00.329 00:09:00.329 --- 10.0.0.1 ping statistics --- 00:09:00.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.329 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2576975 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2576975 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2576975 ']' 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.329 11:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:00.329 [2024-11-20 11:09:52.540238] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:09:00.329 [2024-11-20 11:09:52.643687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.329 [2024-11-20 11:09:52.694116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:00.329 [2024-11-20 11:09:52.694177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:00.329 [2024-11-20 11:09:52.694186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:00.329 [2024-11-20 11:09:52.694193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:00.329 [2024-11-20 11:09:52.694199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:00.329 [2024-11-20 11:09:52.694961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 [2024-11-20 11:09:53.421368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 [2024-11-20 11:09:53.445670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
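Each rpc_cmd above is harness shorthand for scripts/rpc.py against that socket. The whole target-side configuration in this stretch of the log (TCP transport with zero-copy, subsystem, data and discovery listeners, a malloc bdev, and its namespace, the last few of which follow just below) can be written out directly; a sketch reusing the $SPDK assumption from the previous sketch:

    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport, zero-copy enabled
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0             # 32 MiB bdev, 4 KiB blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1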
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 malloc0
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:00.898 {
00:09:00.898 "params": {
00:09:00.898 "name": "Nvme$subsystem",
00:09:00.898 "trtype": "$TEST_TRANSPORT",
00:09:00.898 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:00.898 "adrfam": "ipv4",
00:09:00.898 "trsvcid": "$NVMF_PORT",
00:09:00.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:00.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:00.898 "hdgst": ${hdgst:-false},
00:09:00.898 "ddgst": ${ddgst:-false}
00:09:00.898 },
00:09:00.898 "method": "bdev_nvme_attach_controller"
00:09:00.898 }
00:09:00.898 EOF
00:09:00.898 )")
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:00.898 11:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:00.898 "params": {
00:09:00.898 "name": "Nvme1",
00:09:00.898 "trtype": "tcp",
00:09:00.898 "traddr": "10.0.0.2",
00:09:00.898 "adrfam": "ipv4",
00:09:00.898 "trsvcid": "4420",
00:09:00.898 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:00.898 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:00.898 "hdgst": false,
00:09:00.898 "ddgst": false
00:09:00.898 },
00:09:00.898 "method": "bdev_nvme_attach_controller"
00:09:00.898 }'
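gen_nvmf_target_json expands that heredoc into the bdev_nvme_attach_controller entry printed above and hands it to bdevperf over an anonymous pipe (--json /dev/fd/62). Done by hand it is roughly the following; note the subsystems/bdev wrapper is an assumption about the config shape bdevperf expects, since the log only shows the inner entry:

    cfg='{"subsystems":[{"subsystem":"bdev","config":[{
      "method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
                "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
                "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    "$SPDK/build/examples/bdevperf" --json <(printf '%s' "$cfg") -t 10 -q 128 -w verify -o 8192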
00:09:00.898 [2024-11-20 11:09:53.547810] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:09:00.898 [2024-11-20 11:09:53.547878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577320 ]
00:09:01.157 [2024-11-20 11:09:53.641694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:01.157 [2024-11-20 11:09:53.695034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:01.416 Running I/O for 10 seconds...
00:09:03.294 6709.00 IOPS, 52.41 MiB/s
[2024-11-20T10:09:56.976Z] 8188.00 IOPS, 63.97 MiB/s
[2024-11-20T10:09:58.359Z] 8688.33 IOPS, 67.88 MiB/s
[2024-11-20T10:09:59.300Z] 8943.00 IOPS, 69.87 MiB/s
[2024-11-20T10:10:00.244Z] 9096.20 IOPS, 71.06 MiB/s
[2024-11-20T10:10:01.279Z] 9192.83 IOPS, 71.82 MiB/s
[2024-11-20T10:10:02.223Z] 9261.86 IOPS, 72.36 MiB/s
[2024-11-20T10:10:03.162Z] 9316.00 IOPS, 72.78 MiB/s
[2024-11-20T10:10:04.102Z] 9358.67 IOPS, 73.11 MiB/s
[2024-11-20T10:10:04.103Z] 9392.10 IOPS, 73.38 MiB/s
00:09:11.361 Latency(us)
[2024-11-20T10:10:04.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.361 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:11.361 Verification LBA range: start 0x0 length 0x1000
00:09:11.361 Nvme1n1 : 10.01 9392.92 73.38 0.00 0.00 13580.73 733.87 27634.35
00:09:11.361 [2024-11-20T10:10:04.103Z] ===================================================================================================================
00:09:11.361 [2024-11-20T10:10:04.103Z] Total : 9392.92 73.38 0.00 0.00 13580.73 733.87 27634.35
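A quick consistency check on the table: the MiB/s column is just IOPS times the 8 KiB I/O size set by -o 8192, i.e. 9392.92 IOPS * 8192 B = 76,946,800 B/s, and 76,946,800 / 1,048,576 ≈ 73.38 MiB/s, matching the Total row.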
"Nvme$subsystem", 00:09:11.361 "trtype": "$TEST_TRANSPORT", 00:09:11.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.361 "adrfam": "ipv4", 00:09:11.361 "trsvcid": "$NVMF_PORT", 00:09:11.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.361 "hdgst": ${hdgst:-false}, 00:09:11.361 "ddgst": ${ddgst:-false} 00:09:11.361 }, 00:09:11.361 "method": "bdev_nvme_attach_controller" 00:09:11.361 } 00:09:11.361 EOF 00:09:11.361 )") 00:09:11.361 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:11.361 [2024-11-20 11:10:04.087078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.361 [2024-11-20 11:10:04.087108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.361 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:11.361 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:11.361 11:10:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:11.361 "params": { 00:09:11.361 "name": "Nvme1", 00:09:11.361 "trtype": "tcp", 00:09:11.361 "traddr": "10.0.0.2", 00:09:11.361 "adrfam": "ipv4", 00:09:11.361 "trsvcid": "4420", 00:09:11.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.361 "hdgst": false, 00:09:11.361 "ddgst": false 00:09:11.361 }, 00:09:11.361 "method": "bdev_nvme_attach_controller" 00:09:11.361 }' 00:09:11.361 [2024-11-20 11:10:04.099074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.361 [2024-11-20 11:10:04.099088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.621 [2024-11-20 11:10:04.111103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.621 [2024-11-20 11:10:04.111110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.621 [2024-11-20 11:10:04.123135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.621 [2024-11-20 11:10:04.123142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.621 [2024-11-20 11:10:04.129369] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:09:11.621 [2024-11-20 11:10:04.135168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:11.621 [2024-11-20 11:10:04.135175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line add_ns rejection repeats at roughly 12 ms intervals for the rest of the run (11:10:04.147 through 11:10:06.659); the remaining distinct lines follow ...]
[2024-11-20 11:10:04.212812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 11:10:04.242112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:11.882 Running I/O for 5 seconds...
00:09:12.665 19118.00 IOPS, 149.36 MiB/s [2024-11-20T10:10:05.407Z]
00:09:13.709 19194.00 IOPS, 149.95 MiB/s [2024-11-20T10:10:06.451Z]
00:09:13.971 [2024-11-20 11:10:06.659774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:13.971 [2024-11-20 11:10:06.659789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:13.971 [2024-11-20 11:10:06.673074]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.971 [2024-11-20 11:10:06.673088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.971 [2024-11-20 11:10:06.686557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.971 [2024-11-20 11:10:06.686573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.971 [2024-11-20 11:10:06.699704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.971 [2024-11-20 11:10:06.699719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.713260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.713275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.726670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.726685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.739595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.739610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.753250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.753269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.765761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.765775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.778217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.778231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.791062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.791076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.804071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.804085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.816621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.816636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.829840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.829854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.843070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.843085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.855552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.855567] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.868796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.868811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.882383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.882398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.895937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.895952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.909225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.909239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.922593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.922607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.936196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.936211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.948577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.948592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.232 [2024-11-20 11:10:06.961650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.232 [2024-11-20 11:10:06.961664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:06.975388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:06.975403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:06.987600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:06.987615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.001346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.001364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.013782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.013797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.026920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.026934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.040392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.040406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.053971] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.053985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.067094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.067109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.080424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.080439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.093889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.093904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.107344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.107358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.120918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.120932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.134203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.134217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.146993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.147007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.160438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.160453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.174009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.174024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.187134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.187149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.200424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.200439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.213327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.213342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.493 [2024-11-20 11:10:07.227092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.493 [2024-11-20 11:10:07.227106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.753 [2024-11-20 11:10:07.240593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.240607] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.253827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.253845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.267028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.267043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.279679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.279694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.292314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.292328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.305011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.305026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.317858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.317873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.330433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.330448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.343755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.343770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.356131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.356145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.369043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.369057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.381704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.381718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 19227.67 IOPS, 150.22 MiB/s [2024-11-20T10:10:07.496Z] [2024-11-20 11:10:07.394929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.394944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.407508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.407523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.420146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.420164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 
11:10:07.433826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.433841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.446826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.446840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.460400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.460415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.473583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.473597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.754 [2024-11-20 11:10:07.486923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.754 [2024-11-20 11:10:07.486938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.013 [2024-11-20 11:10:07.500402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.013 [2024-11-20 11:10:07.500421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.513522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.513537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.525910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.525924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.538562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.538576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.551454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.551469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.564600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.564615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.577534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.577549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.590932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.590946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.603763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.603778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.617196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.617210] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.630530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.630544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.643086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.643100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.655406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.655421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.668860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.668875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.681599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.681614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.694858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.694872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.707128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.707142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.719970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.719985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.732350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.732364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.014 [2024-11-20 11:10:07.745680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.014 [2024-11-20 11:10:07.745694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.758096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.758110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.770463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.770478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.783591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.783605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.797097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.797112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.809767] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.809782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.822813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.822827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.836261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.836276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.849282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.849296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.862384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.862398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.875616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.875631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.888872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.888887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.901674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.901688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.915114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.915128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.927404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.927418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.940723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.940739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.953806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.953821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.967335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.967351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.980560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.980575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:07.994021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:07.994037] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.274 [2024-11-20 11:10:08.007050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.274 [2024-11-20 11:10:08.007065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.020296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.020311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.033556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.033571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.046573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.046588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.060210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.060224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.073023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.073037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.086364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.086379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.099780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.099795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.112152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.112171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.126006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.126020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.139365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.139380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.151952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.151966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.164811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.164825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.177270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.177284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.190279] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.190293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.203391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.203405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.216349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.216364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.228517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.228532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.241984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.241999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.255520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.255535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.535 [2024-11-20 11:10:08.268727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.535 [2024-11-20 11:10:08.268741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.281838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.281854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.294221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.294236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.307776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.307790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.321187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.321201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.334451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.334466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.347867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.347881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.360571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.360585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.796 [2024-11-20 11:10:08.373655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.796 [2024-11-20 11:10:08.373669] 
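Nothing is failing unexpectedly here: the pattern above is the test deliberately re-adding namespace ID 1 while it is still attached, so each RPC trips the NSID-uniqueness check in spdk_nvmf_subsystem_add_ns_ext and nvmf_rpc_ns_paused reports the failure once per attempt. A minimal sketch of the shape of such a driving loop, as a hypothetical reconstruction (not the verbatim zcopy.sh; rpc_cmd is the autotest wrapper around scripts/rpc.py that appears later in this trace, and $perf_pid and malloc0 are assumed names):

    # Hypothetical reconstruction of the namespace-collision loop:
    # while the background I/O job is alive, keep requesting NSID 1,
    # which is already in use, so every attempt fails as logged above.
    while kill -0 "$perf_pid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done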
00:09:15.796 19242.00 IOPS, 150.33 MiB/s [2024-11-20T10:10:08.538Z]
[... the error pair continues at the same cadence, 11:10:08.386284 through 11:10:09.395606 ...]
00:09:16.839 19249.20 IOPS, 150.38 MiB/s [2024-11-20T10:10:09.581Z]
00:09:16.839                                                                 Latency(us)
00:09:16.839 [2024-11-20T10:10:09.581Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:16.839 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:16.839 Nvme1n1            :       5.01   19252.88     150.41      0.00     0.00    6642.31    3140.27   16711.68
00:09:16.839 [2024-11-20T10:10:09.581Z] ===================================================================================================================
00:09:16.839 [2024-11-20T10:10:09.581Z] Total              :            19252.88     150.41      0.00     0.00    6642.31    3140.27   16711.68
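A quick consistency check on the summary: with queue depth 128 and the reported 6642.31 us average latency, Little's law (outstanding I/O = IOPS x latency) implies almost exactly the measured throughput. All numbers below are copied from the table; nothing else is assumed:

    # Little's law sanity check: depth / avg_latency ~= IOPS.
    # 128 / 6642.31 us gives ~19270 IOPS, within ~0.1% of the
    # reported 19252.88 over the 5.01 s run.
    awk 'BEGIN { printf "implied IOPS: %.0f\n", 128 / (6642.31 / 1e6) }'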
[... the NSID-in-use / unable-to-add pair repeats a final few times, 11:10:09.404573 through 11:10:09.500822, and the loop then finds its target process already gone ...]
00:09:16.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2579348) - No such process
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2579348
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:16.839 delay0
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:16.839 11:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:17.100 [2024-11-20 11:10:09.666349] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:25.236 Initializing NVMe Controllers
00:09:25.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:25.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:25.237 Initialization complete. Launching workers.
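For reference, the rpc_cmd calls traced above are thin wrappers over scripts/rpc.py, so the same teardown and re-setup can be reproduced by hand. A minimal sketch, assuming the target is reachable over the default local RPC socket (the flags are copied from the trace; the four bdev_delay_create values are average/p99 read and write latencies in microseconds, so 1000000 holds each I/O for about a second, giving the abort example long-lived commands to cancel):

    # Remove the contended namespace, wrap malloc0 in a ~1 s delay bdev,
    # and re-expose it as NSID 1 before running the abort tool.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1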
00:09:25.237 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 32697 00:09:25.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 32816, failed to submit 123 00:09:25.237 success 32723, unsuccessful 93, failed 0 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.237 rmmod nvme_tcp 00:09:25.237 rmmod nvme_fabrics 00:09:25.237 rmmod nvme_keyring 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2576975 ']' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2576975 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2576975 ']' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2576975 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2576975 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2576975' 00:09:25.237 killing process with pid 2576975 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2576975 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2576975 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.237 11:10:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.237 11:10:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.631 00:09:26.631 real 0m34.377s 00:09:26.631 user 0m45.268s 00:09:26.631 sys 0m11.664s 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.631 ************************************ 00:09:26.631 END TEST nvmf_zcopy 00:09:26.631 ************************************ 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.631 ************************************ 00:09:26.631 START TEST nvmf_nmic 00:09:26.631 ************************************ 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:26.631 * Looking for test storage... 
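Between the zcopy summary above and the nmic suite that starts below, nvmftestfini tears the fixture down. A condensed sketch of what that trace amounts to, assuming this run's names (target pid in $nvmfpid, namespace cvl_0_0_ns_spdk, initiator interface cvl_0_1); the real helpers add retries and error handling:

    sync                                        # flush buffers before unloading
    modprobe -v -r nvme-tcp                     # the rmmod nvme_tcp/nvme_fabrics/
    modprobe -v -r nvme-fabrics                 # nvme_keyring lines above are this step
    kill "$nvmfpid" && wait "$nvmfpid"          # killprocess: stop nvmf_tgt (reactor_1 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk             # _remove_spdk_ns: drop the target namespace
    ip -4 addr flush cvl_0_1                    # clear the initiator-side address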
00:09:26.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.631 --rc genhtml_branch_coverage=1 00:09:26.631 --rc genhtml_function_coverage=1 00:09:26.631 --rc genhtml_legend=1 00:09:26.631 --rc geninfo_all_blocks=1 00:09:26.631 --rc geninfo_unexecuted_blocks=1 00:09:26.631 00:09:26.631 ' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.631 --rc genhtml_branch_coverage=1 00:09:26.631 --rc genhtml_function_coverage=1 00:09:26.631 --rc genhtml_legend=1 00:09:26.631 --rc geninfo_all_blocks=1 00:09:26.631 --rc geninfo_unexecuted_blocks=1 00:09:26.631 00:09:26.631 ' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.631 --rc genhtml_branch_coverage=1 00:09:26.631 --rc genhtml_function_coverage=1 00:09:26.631 --rc genhtml_legend=1 00:09:26.631 --rc geninfo_all_blocks=1 00:09:26.631 --rc geninfo_unexecuted_blocks=1 00:09:26.631 00:09:26.631 ' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.631 --rc genhtml_branch_coverage=1 00:09:26.631 --rc genhtml_function_coverage=1 00:09:26.631 --rc genhtml_legend=1 00:09:26.631 --rc geninfo_all_blocks=1 00:09:26.631 --rc geninfo_unexecuted_blocks=1 00:09:26.631 00:09:26.631 ' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
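The xtrace block above is scripts/common.sh deciding whether the installed lcov predates 2.0 so the right coverage flags get exported. The comparison splits each version string on '.', '-', or ':' and walks the fields numerically; a simplified reconstruction of that logic (the real cmp_versions helper also dispatches on '>', '<=', and similar operators via a case statement):

    # Returns success (0) when version $1 sorts strictly before $2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # earliest differing field decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }

    # As in the trace: lcov 1.15 < 2, so the legacy branch/function-coverage
    # flags are exported.
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi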
00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.631 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:26.632 
11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.632 11:10:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:34.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:34.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.776 11:10:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:34.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:34.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.776 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:09:34.777 00:09:34.777 --- 10.0.0.2 ping statistics --- 00:09:34.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.777 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:09:34.777 00:09:34.777 --- 10.0.0.1 ping statistics --- 00:09:34.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.777 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2586040 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2586040 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2586040 ']' 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.777 11:10:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:34.777 [2024-11-20 11:10:26.928907] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
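nvmftestinit has just rebuilt the fixture for the nmic suite: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target, the peer port (cvl_0_1) stays in the default namespace as the initiator, and only then is nvmf_tgt launched inside that namespace so NVMe/TCP traffic crosses the physical link. A condensed replay of the trace, using this run's names:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the harness tags the rule SPDK_NVMF so teardown can find it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Every target-side command, including the app itself, runs in the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The reactor notices that follow are that nvmf_tgt instance (pid 2586040) coming up on cores 0-3, matching the -m 0xF core mask.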
00:09:34.777 [2024-11-20 11:10:26.928972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.777 [2024-11-20 11:10:27.030646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.777 [2024-11-20 11:10:27.085060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.777 [2024-11-20 11:10:27.085116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.777 [2024-11-20 11:10:27.085125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.777 [2024-11-20 11:10:27.085133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.777 [2024-11-20 11:10:27.085139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.777 [2024-11-20 11:10:27.087488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.777 [2024-11-20 11:10:27.087653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.777 [2024-11-20 11:10:27.087821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.777 [2024-11-20 11:10:27.087822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.039 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.039 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:35.039 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.039 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.039 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 [2024-11-20 11:10:27.811950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 Malloc0 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 [2024-11-20 11:10:27.887303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:35.301 test case1: single bdev can't be used in multiple subsystems 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 [2024-11-20 11:10:27.923124] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:35.301 [2024-11-20 11:10:27.923149] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:35.301 [2024-11-20 11:10:27.923165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.301 request: 00:09:35.301 { 00:09:35.301 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:35.301 "namespace": { 00:09:35.301 "bdev_name": "Malloc0", 00:09:35.301 "no_auto_visible": false 
00:09:35.301 }, 00:09:35.301 "method": "nvmf_subsystem_add_ns", 00:09:35.301 "req_id": 1 00:09:35.301 } 00:09:35.301 Got JSON-RPC error response 00:09:35.301 response: 00:09:35.301 { 00:09:35.301 "code": -32602, 00:09:35.301 "message": "Invalid parameters" 00:09:35.301 } 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:35.301 Adding namespace failed - expected result. 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:35.301 test case2: host connect to nvmf target in multiple paths 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.301 [2024-11-20 11:10:27.935326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.301 11:10:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.216 11:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:38.603 11:10:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.603 11:10:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:38.603 11:10:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.603 11:10:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:38.603 11:10:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:40.515 11:10:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:40.515 [global]
00:09:40.515 thread=1
00:09:40.515 invalidate=1
00:09:40.515 rw=write
00:09:40.515 time_based=1
00:09:40.515 runtime=1
00:09:40.515 ioengine=libaio
00:09:40.515 direct=1
00:09:40.515 bs=4096
00:09:40.515 iodepth=1
00:09:40.515 norandommap=0
00:09:40.515 numjobs=1
00:09:40.515
00:09:40.515 verify_dump=1
00:09:40.515 verify_backlog=512
00:09:40.515 verify_state_save=0
00:09:40.515 do_verify=1
00:09:40.515 verify=crc32c-intel
00:09:40.515 [job0]
00:09:40.515 filename=/dev/nvme0n1
00:09:40.515 Could not set queue depth (nvme0n1)
00:09:40.775 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:40.775 fio-3.35
00:09:40.775 Starting 1 thread
00:09:42.158
00:09:42.158 job0: (groupid=0, jobs=1): err= 0: pid=2587582: Wed Nov 20 11:10:34 2024
00:09:42.158 read: IOPS=616, BW=2466KiB/s (2525kB/s)(2468KiB/1001msec)
00:09:42.158 slat (nsec): min=7225, max=63220, avg=23922.02, stdev=7761.54
00:09:42.158 clat (usec): min=523, max=1336, avg=771.38, stdev=88.70
00:09:42.158 lat (usec): min=532, max=1363, avg=795.30, stdev=89.79
00:09:42.158 clat percentiles (usec):
00:09:42.158 | 1.00th=[ 586], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 701],
00:09:42.158 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 783],
00:09:42.158 | 70.00th=[ 799], 80.00th=[ 832], 90.00th=[ 881], 95.00th=[ 914],
00:09:42.158 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1336], 99.95th=[ 1336],
00:09:42.158 | 99.99th=[ 1336]
00:09:42.158 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:09:42.158 slat (usec): min=10, max=29642, avg=57.39, stdev=925.51
00:09:42.158 clat (usec): min=180, max=1022, avg=429.54, stdev=80.00
00:09:42.158 lat (usec): min=215, max=29989, avg=486.93, stdev=926.69
00:09:42.158 clat percentiles (usec):
00:09:42.158 | 1.00th=[ 247], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 359],
00:09:42.158 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 441], 60.00th=[ 461],
00:09:42.158 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 529],
00:09:42.158 | 99.00th=[ 627], 99.50th=[ 701], 99.90th=[ 914], 99.95th=[ 1020],
00:09:42.158 | 99.99th=[ 1020]
00:09:42.158 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:09:42.158 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:09:42.159 lat (usec) : 250=0.79%, 500=53.69%, 750=20.23%, 1000=24.80%
00:09:42.159 lat (msec) : 2=0.49%
00:09:42.159 cpu : usr=2.00%, sys=4.80%, ctx=1644, majf=0, minf=1
00:09:42.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:09:42.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:42.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:09:42.159 issued rwts: total=617,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:42.159 latency : target=0, window=0, percentile=100.00%, depth=1
00:09:42.159
00:09:42.159 Run status group 0 (all jobs):
00:09:42.159 READ: bw=2466KiB/s (2525kB/s), 2466KiB/s-2466KiB/s (2525kB/s-2525kB/s), io=2468KiB (2527kB), run=1001-1001msec
00:09:42.159 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec
00:09:42.159
00:09:42.159 Disk stats (read/write):
00:09:42.159 nvme0n1: ios=564/990, merge=0/0, ticks=989/404, in_queue=1393, util=98.80%
00:09:42.159 11:10:34
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.159 rmmod nvme_tcp 00:09:42.159 rmmod nvme_fabrics 00:09:42.159 rmmod nvme_keyring 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2586040 ']' 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2586040 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2586040 ']' 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2586040 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.159 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586040 00:09:42.419 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.419 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.419 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586040' 00:09:42.419 killing process with pid 2586040 00:09:42.419 11:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2586040 00:09:42.419 11:10:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2586040 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.419 11:10:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.964 00:09:44.964 real 0m18.020s 00:09:44.964 user 0m50.116s 00:09:44.964 sys 0m6.734s 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.964 ************************************ 00:09:44.964 END TEST nvmf_nmic 00:09:44.964 ************************************ 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.964 ************************************ 00:09:44.964 START TEST nvmf_fio_target 00:09:44.964 ************************************ 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:44.964 * Looking for test storage... 
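Each suite in this file runs under the same run_test wrapper, which prints the starred START/END banners and the real/user/sys block seen above. A minimal reconstruction of the wrapper's shape (the real helper in autotest_common.sh also validates arguments and manages xtrace state):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        time "$@"          # e.g. .../test/nvmf/target/fio.sh --transport=tcp
        echo "END TEST $test_name"
        echo "************************************"
    }

bash's time keyword is what produces the 'real 0m18.020s / user 0m50.116s / sys 0m6.734s' timing that closes each suite above.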
00:09:44.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.964 --rc genhtml_branch_coverage=1 00:09:44.964 --rc genhtml_function_coverage=1 00:09:44.964 --rc genhtml_legend=1 00:09:44.964 --rc geninfo_all_blocks=1 00:09:44.964 --rc geninfo_unexecuted_blocks=1 00:09:44.964 00:09:44.964 ' 00:09:44.964 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.964 --rc genhtml_branch_coverage=1 00:09:44.964 --rc genhtml_function_coverage=1 00:09:44.964 --rc genhtml_legend=1 00:09:44.965 --rc geninfo_all_blocks=1 00:09:44.965 --rc geninfo_unexecuted_blocks=1 00:09:44.965 00:09:44.965 ' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.965 --rc genhtml_branch_coverage=1 00:09:44.965 --rc genhtml_function_coverage=1 00:09:44.965 --rc genhtml_legend=1 00:09:44.965 --rc geninfo_all_blocks=1 00:09:44.965 --rc geninfo_unexecuted_blocks=1 00:09:44.965 00:09:44.965 ' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.965 --rc genhtml_branch_coverage=1 00:09:44.965 --rc genhtml_function_coverage=1 00:09:44.965 --rc genhtml_legend=1 00:09:44.965 --rc geninfo_all_blocks=1 00:09:44.965 --rc geninfo_unexecuted_blocks=1 00:09:44.965 00:09:44.965 ' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.965 11:10:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.965 11:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.103 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.103 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.103 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.103 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.103 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.103 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.104 11:10:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:53.104 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:53.104 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.104 11:10:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:53.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:53.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.104 11:10:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:09:53.104 00:09:53.104 --- 10.0.0.2 ping statistics --- 00:09:53.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.104 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:09:53.104 00:09:53.104 --- 10.0.0.1 ping statistics --- 00:09:53.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.104 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.104 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2592200 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2592200 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2592200 ']' 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.105 11:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.105 [2024-11-20 11:10:44.991625] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
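(The trace above, from nvmf/common.sh, is the point where the harness turns one dual-port e810 NIC into a self-contained initiator/target pair: the target-side port cvl_0_0 is moved into a private network namespace, both ports are addressed on 10.0.0.0/24, an iptables rule opens TCP port 4420, and connectivity is proven with one ping in each direction before nvmf_tgt is launched inside the namespace. A minimal sketch of that sequence, assuming the interface names and addresses used in this run and two ports cabled back-to-back:

    # sketch of the netns split traced above (names/addresses taken from this log)
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                                        # namespace for the target side
    ip link set cvl_0_0 netns "$NS"                           # move target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the ns
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                        # root ns -> namespaced target port
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # namespace -> initiator port

With both pings answering, the target is started as ip netns exec "$NS" nvmf_tgt …, so it listens only on the namespaced port while the kernel initiator connects from the root namespace.)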
00:09:53.105 [2024-11-20 11:10:44.991691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.105 [2024-11-20 11:10:45.092496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.105 [2024-11-20 11:10:45.145223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.105 [2024-11-20 11:10:45.145276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.105 [2024-11-20 11:10:45.145284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.105 [2024-11-20 11:10:45.145291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.105 [2024-11-20 11:10:45.145297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.105 [2024-11-20 11:10:45.147269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.105 [2024-11-20 11:10:45.147430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.105 [2024-11-20 11:10:45.147593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.105 [2024-11-20 11:10:45.147593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.105 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.105 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:53.105 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:53.105 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.105 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.366 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.366 11:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:53.366 [2024-11-20 11:10:46.031872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.366 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.627 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:53.627 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.888 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:53.888 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.149 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:54.149 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.410 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:54.410 11:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:54.410 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.670 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:54.670 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:54.930 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:54.930 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.190 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:55.190 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:55.190 11:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:55.452 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.452 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.713 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:55.713 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:55.973 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.973 [2024-11-20 11:10:48.607120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.973 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:56.234 11:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:56.494 11:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.879 11:10:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:57.879 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:57.879 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.879 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:57.879 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:57.879 11:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:00.420 11:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:00.420 [global] 00:10:00.420 thread=1 00:10:00.420 invalidate=1 00:10:00.420 rw=write 00:10:00.420 time_based=1 00:10:00.420 runtime=1 00:10:00.420 ioengine=libaio 00:10:00.420 direct=1 00:10:00.420 bs=4096 00:10:00.420 iodepth=1 00:10:00.420 norandommap=0 00:10:00.420 numjobs=1 00:10:00.420 00:10:00.420 verify_dump=1 00:10:00.420 verify_backlog=512 00:10:00.420 verify_state_save=0 00:10:00.420 do_verify=1 00:10:00.420 verify=crc32c-intel 00:10:00.420 [job0] 00:10:00.420 filename=/dev/nvme0n1 00:10:00.420 [job1] 00:10:00.420 filename=/dev/nvme0n2 00:10:00.420 [job2] 00:10:00.420 filename=/dev/nvme0n3 00:10:00.420 [job3] 00:10:00.420 filename=/dev/nvme0n4 00:10:00.420 Could not set queue depth (nvme0n1) 00:10:00.420 Could not set queue depth (nvme0n2) 00:10:00.420 Could not set queue depth (nvme0n3) 00:10:00.420 Could not set queue depth (nvme0n4) 00:10:00.420 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.420 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.420 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.420 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.420 fio-3.35 00:10:00.420 Starting 4 threads 00:10:01.808 00:10:01.808 job0: (groupid=0, jobs=1): err= 0: pid=2593898: Wed Nov 20 11:10:54 2024 00:10:01.808 read: IOPS=665, BW=2661KiB/s (2725kB/s)(2664KiB/1001msec) 00:10:01.808 slat (nsec): min=6736, max=56764, avg=23039.85, stdev=7874.72 00:10:01.808 clat (usec): min=293, max=1019, avg=760.89, stdev=85.29 00:10:01.808 lat (usec): min=307, max=1045, avg=783.93, stdev=87.41 00:10:01.808 clat percentiles (usec): 00:10:01.808 | 1.00th=[ 506], 5.00th=[ 611], 10.00th=[ 660], 20.00th=[ 701], 
00:10:01.809 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 775], 60.00th=[ 791], 00:10:01.809 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 865], 00:10:01.809 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1020], 00:10:01.809 | 99.99th=[ 1020] 00:10:01.809 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:01.809 slat (nsec): min=9489, max=51371, avg=27701.61, stdev=10372.94 00:10:01.809 clat (usec): min=149, max=3530, avg=427.69, stdev=130.10 00:10:01.809 lat (usec): min=161, max=3564, avg=455.39, stdev=133.98 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[ 241], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 347], 00:10:01.809 | 30.00th=[ 379], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 453], 00:10:01.809 | 70.00th=[ 474], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 553], 00:10:01.809 | 99.00th=[ 611], 99.50th=[ 644], 99.90th=[ 824], 99.95th=[ 3523], 00:10:01.809 | 99.99th=[ 3523] 00:10:01.809 bw ( KiB/s): min= 4096, max= 4096, per=31.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.809 lat (usec) : 250=0.77%, 500=48.11%, 750=26.09%, 1000=24.85% 00:10:01.809 lat (msec) : 2=0.12%, 4=0.06% 00:10:01.809 cpu : usr=2.30%, sys=4.50%, ctx=1690, majf=0, minf=2 00:10:01.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 issued rwts: total=666,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.809 job1: (groupid=0, jobs=1): err= 0: pid=2593914: Wed Nov 20 11:10:54 2024 00:10:01.809 read: IOPS=684, BW=2737KiB/s (2803kB/s)(2740KiB/1001msec) 00:10:01.809 slat (nsec): min=5979, max=46712, avg=23302.26, stdev=7611.33 00:10:01.809 clat (usec): min=403, max=1231, avg=754.17, stdev=122.65 00:10:01.809 lat (usec): min=429, max=1258, avg=777.47, stdev=124.72 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[ 478], 5.00th=[ 553], 10.00th=[ 619], 20.00th=[ 660], 00:10:01.809 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:10:01.809 | 70.00th=[ 783], 80.00th=[ 816], 90.00th=[ 938], 95.00th=[ 1012], 00:10:01.809 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:01.809 | 99.99th=[ 1237] 00:10:01.809 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:01.809 slat (nsec): min=9764, max=66398, avg=27852.93, stdev=11392.16 00:10:01.809 clat (usec): min=175, max=622, avg=416.76, stdev=88.36 00:10:01.809 lat (usec): min=186, max=657, avg=444.61, stdev=96.39 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 314], 00:10:01.809 | 30.00th=[ 367], 40.00th=[ 416], 50.00th=[ 441], 60.00th=[ 461], 00:10:01.809 | 70.00th=[ 478], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 537], 00:10:01.809 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 627], 00:10:01.809 | 99.99th=[ 627] 00:10:01.809 bw ( KiB/s): min= 4096, max= 4096, per=31.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.809 lat (usec) : 250=0.94%, 500=49.33%, 750=29.96%, 1000=17.55% 00:10:01.809 lat (msec) : 2=2.22% 00:10:01.809 cpu : usr=2.70%, sys=4.30%, ctx=1711, majf=0, minf=1 00:10:01.809 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 issued rwts: total=685,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.809 job2: (groupid=0, jobs=1): err= 0: pid=2593933: Wed Nov 20 11:10:54 2024 00:10:01.809 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.809 slat (nsec): min=8787, max=46665, avg=26674.40, stdev=3016.41 00:10:01.809 clat (usec): min=483, max=1250, avg=968.55, stdev=115.11 00:10:01.809 lat (usec): min=509, max=1277, avg=995.22, stdev=115.30 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 881], 00:10:01.809 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1012], 00:10:01.809 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:01.809 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:01.809 | 99.99th=[ 1254] 00:10:01.809 write: IOPS=802, BW=3209KiB/s (3286kB/s)(3212KiB/1001msec); 0 zone resets 00:10:01.809 slat (nsec): min=9845, max=71748, avg=31020.20, stdev=10413.58 00:10:01.809 clat (usec): min=206, max=2069, avg=567.65, stdev=147.48 00:10:01.809 lat (usec): min=217, max=2080, avg=598.67, stdev=150.15 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[ 269], 5.00th=[ 334], 10.00th=[ 379], 20.00th=[ 453], 00:10:01.809 | 30.00th=[ 494], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 603], 00:10:01.809 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 766], 00:10:01.809 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 2073], 99.95th=[ 2073], 00:10:01.809 | 99.99th=[ 2073] 00:10:01.809 bw ( KiB/s): min= 4096, max= 4096, per=31.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.809 lat (usec) : 250=0.46%, 500=18.56%, 750=39.47%, 1000=24.41% 00:10:01.809 lat (msec) : 2=17.03%, 4=0.08% 00:10:01.809 cpu : usr=1.40%, sys=4.60%, ctx=1317, majf=0, minf=1 00:10:01.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 issued rwts: total=512,803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.809 job3: (groupid=0, jobs=1): err= 0: pid=2593940: Wed Nov 20 11:10:54 2024 00:10:01.809 read: IOPS=16, BW=66.6KiB/s (68.2kB/s)(68.0KiB/1021msec) 00:10:01.809 slat (nsec): min=27712, max=28948, avg=28185.35, stdev=376.19 00:10:01.809 clat (usec): min=40980, max=42993, avg=41957.30, stdev=363.51 00:10:01.809 lat (usec): min=41009, max=43021, avg=41985.49, stdev=363.36 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:10:01.809 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:01.809 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:10:01.809 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:01.809 | 99.99th=[43254] 00:10:01.809 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:10:01.809 slat (nsec): min=9608, max=81015, avg=32247.61, stdev=10911.34 00:10:01.809 clat (usec): min=231, 
max=1628, avg=558.27, stdev=159.45 00:10:01.809 lat (usec): min=246, max=1668, avg=590.52, stdev=163.49 00:10:01.809 clat percentiles (usec): 00:10:01.809 | 1.00th=[ 269], 5.00th=[ 302], 10.00th=[ 351], 20.00th=[ 424], 00:10:01.809 | 30.00th=[ 474], 40.00th=[ 506], 50.00th=[ 545], 60.00th=[ 603], 00:10:01.809 | 70.00th=[ 644], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 791], 00:10:01.809 | 99.00th=[ 889], 99.50th=[ 938], 99.90th=[ 1631], 99.95th=[ 1631], 00:10:01.809 | 99.99th=[ 1631] 00:10:01.809 bw ( KiB/s): min= 4096, max= 4096, per=31.09%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.809 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.809 lat (usec) : 250=0.76%, 500=36.11%, 750=49.72%, 1000=9.83% 00:10:01.809 lat (msec) : 2=0.38%, 50=3.21% 00:10:01.809 cpu : usr=0.88%, sys=2.16%, ctx=531, majf=0, minf=1 00:10:01.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.809 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.809 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.809 00:10:01.809 Run status group 0 (all jobs): 00:10:01.809 READ: bw=7365KiB/s (7542kB/s), 66.6KiB/s-2737KiB/s (68.2kB/s-2803kB/s), io=7520KiB (7700kB), run=1001-1021msec 00:10:01.809 WRITE: bw=12.9MiB/s (13.5MB/s), 2006KiB/s-4092KiB/s (2054kB/s-4190kB/s), io=13.1MiB (13.8MB), run=1001-1021msec 00:10:01.809 00:10:01.809 Disk stats (read/write): 00:10:01.809 nvme0n1: ios=562/934, merge=0/0, ticks=414/384, in_queue=798, util=86.07% 00:10:01.809 nvme0n2: ios=535/950, merge=0/0, ticks=1349/383, in_queue=1732, util=97.04% 00:10:01.809 nvme0n3: ios=534/542, merge=0/0, ticks=1411/280, in_queue=1691, util=96.93% 00:10:01.809 nvme0n4: ios=69/512, merge=0/0, ticks=1304/224, in_queue=1528, util=96.90% 00:10:01.809 11:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:01.809 [global] 00:10:01.809 thread=1 00:10:01.809 invalidate=1 00:10:01.809 rw=randwrite 00:10:01.809 time_based=1 00:10:01.809 runtime=1 00:10:01.809 ioengine=libaio 00:10:01.809 direct=1 00:10:01.809 bs=4096 00:10:01.809 iodepth=1 00:10:01.809 norandommap=0 00:10:01.809 numjobs=1 00:10:01.809 00:10:01.809 verify_dump=1 00:10:01.809 verify_backlog=512 00:10:01.809 verify_state_save=0 00:10:01.809 do_verify=1 00:10:01.809 verify=crc32c-intel 00:10:01.809 [job0] 00:10:01.809 filename=/dev/nvme0n1 00:10:01.809 [job1] 00:10:01.809 filename=/dev/nvme0n2 00:10:01.809 [job2] 00:10:01.809 filename=/dev/nvme0n3 00:10:01.809 [job3] 00:10:01.809 filename=/dev/nvme0n4 00:10:01.809 Could not set queue depth (nvme0n1) 00:10:01.809 Could not set queue depth (nvme0n2) 00:10:01.809 Could not set queue depth (nvme0n3) 00:10:01.809 Could not set queue depth (nvme0n4) 00:10:02.072 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.072 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.072 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.072 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.072 fio-3.35 00:10:02.072 Starting 
4 threads 00:10:03.459 00:10:03.459 job0: (groupid=0, jobs=1): err= 0: pid=2594411: Wed Nov 20 11:10:55 2024 00:10:03.459 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:10:03.459 slat (nsec): min=10759, max=28181, avg=26764.53, stdev=3881.78 00:10:03.459 clat (usec): min=40851, max=42407, avg=41247.14, stdev=498.86 00:10:03.459 lat (usec): min=40879, max=42417, avg=41273.90, stdev=496.70 00:10:03.459 clat percentiles (usec): 00:10:03.459 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:03.459 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:03.459 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:03.459 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:03.459 | 99.99th=[42206] 00:10:03.459 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:03.459 slat (nsec): min=9632, max=66775, avg=27000.36, stdev=11903.46 00:10:03.459 clat (usec): min=143, max=662, avg=396.22, stdev=86.07 00:10:03.460 lat (usec): min=178, max=674, avg=423.22, stdev=90.53 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 202], 5.00th=[ 262], 10.00th=[ 285], 20.00th=[ 314], 00:10:03.460 | 30.00th=[ 343], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 433], 00:10:03.460 | 70.00th=[ 453], 80.00th=[ 474], 90.00th=[ 502], 95.00th=[ 519], 00:10:03.460 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 660], 99.95th=[ 660], 00:10:03.460 | 99.99th=[ 660] 00:10:03.460 bw ( KiB/s): min= 4096, max= 4096, per=42.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.460 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.460 lat (usec) : 250=4.14%, 500=82.11%, 750=10.17% 00:10:03.460 lat (msec) : 50=3.58% 00:10:03.460 cpu : usr=1.00%, sys=1.10%, ctx=534, majf=0, minf=1 00:10:03.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.460 job1: (groupid=0, jobs=1): err= 0: pid=2594418: Wed Nov 20 11:10:55 2024 00:10:03.460 read: IOPS=27, BW=111KiB/s (114kB/s)(112KiB/1007msec) 00:10:03.460 slat (nsec): min=26211, max=27404, avg=26784.57, stdev=230.76 00:10:03.460 clat (usec): min=419, max=42372, avg=27039.79, stdev=19968.76 00:10:03.460 lat (usec): min=446, max=42399, avg=27066.58, stdev=19968.80 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 420], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 799], 00:10:03.460 | 30.00th=[ 857], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:03.460 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:03.460 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:03.460 | 99.99th=[42206] 00:10:03.460 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:03.460 slat (nsec): min=9737, max=53896, avg=29422.75, stdev=10091.44 00:10:03.460 clat (usec): min=103, max=905, avg=447.89, stdev=101.60 00:10:03.460 lat (usec): min=113, max=940, avg=477.31, stdev=106.90 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 251], 5.00th=[ 281], 10.00th=[ 306], 20.00th=[ 351], 00:10:03.460 | 30.00th=[ 400], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 478], 00:10:03.460 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 570], 
95.00th=[ 594], 00:10:03.460 | 99.00th=[ 668], 99.50th=[ 725], 99.90th=[ 906], 99.95th=[ 906], 00:10:03.460 | 99.99th=[ 906] 00:10:03.460 bw ( KiB/s): min= 4096, max= 4096, per=42.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.460 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.460 lat (usec) : 250=0.93%, 500=65.00%, 750=29.44%, 1000=1.30% 00:10:03.460 lat (msec) : 50=3.33% 00:10:03.460 cpu : usr=1.09%, sys=1.19%, ctx=541, majf=0, minf=1 00:10:03.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.460 job2: (groupid=0, jobs=1): err= 0: pid=2594425: Wed Nov 20 11:10:55 2024 00:10:03.460 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:03.460 slat (nsec): min=6921, max=67931, avg=27104.26, stdev=6447.20 00:10:03.460 clat (usec): min=454, max=1232, avg=947.15, stdev=85.70 00:10:03.460 lat (usec): min=465, max=1261, avg=974.25, stdev=87.86 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 889], 00:10:03.460 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:10:03.460 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:10:03.460 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1237], 99.95th=[ 1237], 00:10:03.460 | 99.99th=[ 1237] 00:10:03.460 write: IOPS=862, BW=3449KiB/s (3531kB/s)(3452KiB/1001msec); 0 zone resets 00:10:03.460 slat (nsec): min=9330, max=59494, avg=30310.48, stdev=10250.12 00:10:03.460 clat (usec): min=204, max=834, avg=537.94, stdev=109.15 00:10:03.460 lat (usec): min=215, max=861, avg=568.25, stdev=113.35 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 260], 5.00th=[ 343], 10.00th=[ 388], 20.00th=[ 445], 00:10:03.460 | 30.00th=[ 494], 40.00th=[ 523], 50.00th=[ 537], 60.00th=[ 570], 00:10:03.460 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 701], 00:10:03.460 | 99.00th=[ 775], 99.50th=[ 807], 99.90th=[ 832], 99.95th=[ 832], 00:10:03.460 | 99.99th=[ 832] 00:10:03.460 bw ( KiB/s): min= 4096, max= 4096, per=42.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.460 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.460 lat (usec) : 250=0.51%, 500=19.05%, 750=42.91%, 1000=27.42% 00:10:03.460 lat (msec) : 2=10.11% 00:10:03.460 cpu : usr=2.10%, sys=5.30%, ctx=1376, majf=0, minf=1 00:10:03.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 issued rwts: total=512,863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.460 job3: (groupid=0, jobs=1): err= 0: pid=2594432: Wed Nov 20 11:10:55 2024 00:10:03.460 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:10:03.460 slat (nsec): min=28363, max=29461, avg=28857.21, stdev=246.47 00:10:03.460 clat (usec): min=911, max=42804, avg=35436.72, stdev=15348.15 00:10:03.460 lat (usec): min=939, max=42833, avg=35465.58, stdev=15348.22 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 914], 5.00th=[ 914], 10.00th=[ 930], 
20.00th=[41157], 00:10:03.460 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:03.460 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:03.460 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:03.460 | 99.99th=[42730] 00:10:03.460 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:03.460 slat (nsec): min=9666, max=54106, avg=31751.91, stdev=10159.41 00:10:03.460 clat (usec): min=219, max=981, avg=605.54, stdev=140.50 00:10:03.460 lat (usec): min=231, max=1016, avg=637.29, stdev=144.99 00:10:03.460 clat percentiles (usec): 00:10:03.460 | 1.00th=[ 277], 5.00th=[ 359], 10.00th=[ 416], 20.00th=[ 478], 00:10:03.460 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 644], 00:10:03.460 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 824], 00:10:03.460 | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 979], 99.95th=[ 979], 00:10:03.460 | 99.99th=[ 979] 00:10:03.460 bw ( KiB/s): min= 4096, max= 4096, per=42.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:03.460 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:03.460 lat (usec) : 250=0.19%, 500=22.22%, 750=58.95%, 1000=15.63% 00:10:03.460 lat (msec) : 50=3.01% 00:10:03.460 cpu : usr=1.20%, sys=1.99%, ctx=532, majf=0, minf=1 00:10:03.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.460 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.460 00:10:03.460 Run status group 0 (all jobs): 00:10:03.460 READ: bw=2296KiB/s (2351kB/s), 75.6KiB/s-2046KiB/s (77.4kB/s-2095kB/s), io=2312KiB (2367kB), run=1001-1007msec 00:10:03.460 WRITE: bw=9529KiB/s (9758kB/s), 2034KiB/s-3449KiB/s (2083kB/s-3531kB/s), io=9596KiB (9826kB), run=1001-1007msec 00:10:03.460 00:10:03.460 Disk stats (read/write): 00:10:03.461 nvme0n1: ios=57/512, merge=0/0, ticks=694/193, in_queue=887, util=86.27% 00:10:03.461 nvme0n2: ios=46/512, merge=0/0, ticks=1479/218, in_queue=1697, util=88.48% 00:10:03.461 nvme0n3: ios=566/583, merge=0/0, ticks=615/294, in_queue=909, util=95.46% 00:10:03.461 nvme0n4: ios=73/512, merge=0/0, ticks=773/238, in_queue=1011, util=94.24% 00:10:03.461 11:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:03.461 [global] 00:10:03.461 thread=1 00:10:03.461 invalidate=1 00:10:03.461 rw=write 00:10:03.461 time_based=1 00:10:03.461 runtime=1 00:10:03.461 ioengine=libaio 00:10:03.461 direct=1 00:10:03.461 bs=4096 00:10:03.461 iodepth=128 00:10:03.461 norandommap=0 00:10:03.461 numjobs=1 00:10:03.461 00:10:03.461 verify_dump=1 00:10:03.461 verify_backlog=512 00:10:03.461 verify_state_save=0 00:10:03.461 do_verify=1 00:10:03.461 verify=crc32c-intel 00:10:03.461 [job0] 00:10:03.461 filename=/dev/nvme0n1 00:10:03.461 [job1] 00:10:03.461 filename=/dev/nvme0n2 00:10:03.461 [job2] 00:10:03.461 filename=/dev/nvme0n3 00:10:03.461 [job3] 00:10:03.461 filename=/dev/nvme0n4 00:10:03.461 Could not set queue depth (nvme0n1) 00:10:03.461 Could not set queue depth (nvme0n2) 00:10:03.461 Could not set queue depth (nvme0n3) 00:10:03.461 Could not set queue depth (nvme0n4) 00:10:03.722 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.722 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.722 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.722 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.722 fio-3.35 00:10:03.722 Starting 4 threads 00:10:05.108 00:10:05.108 job0: (groupid=0, jobs=1): err= 0: pid=2594933: Wed Nov 20 11:10:57 2024 00:10:05.108 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:05.108 slat (nsec): min=945, max=24318k, avg=135291.63, stdev=1125752.96 00:10:05.108 clat (usec): min=2737, max=72150, avg=17197.01, stdev=15574.94 00:10:05.108 lat (usec): min=2743, max=72178, avg=17332.30, stdev=15701.77 00:10:05.108 clat percentiles (usec): 00:10:05.108 | 1.00th=[ 3490], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6194], 00:10:05.108 | 30.00th=[ 7504], 40.00th=[ 8291], 50.00th=[10683], 60.00th=[11600], 00:10:05.108 | 70.00th=[14222], 80.00th=[30016], 90.00th=[47973], 95.00th=[52691], 00:10:05.108 | 99.00th=[55837], 99.50th=[58459], 99.90th=[63701], 99.95th=[67634], 00:10:05.108 | 99.99th=[71828] 00:10:05.108 write: IOPS=4467, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1001msec); 0 zone resets 00:10:05.108 slat (nsec): min=1599, max=24779k, avg=86330.76, stdev=793480.36 00:10:05.108 clat (usec): min=696, max=66619, avg=12614.33, stdev=11170.31 00:10:05.108 lat (usec): min=1059, max=66663, avg=12700.66, stdev=11249.23 00:10:05.108 clat percentiles (usec): 00:10:05.108 | 1.00th=[ 2147], 5.00th=[ 3720], 10.00th=[ 4228], 20.00th=[ 5014], 00:10:05.108 | 30.00th=[ 5407], 40.00th=[ 6980], 50.00th=[ 7635], 60.00th=[ 8848], 00:10:05.108 | 70.00th=[13435], 80.00th=[16581], 90.00th=[32900], 95.00th=[38011], 00:10:05.108 | 99.00th=[47449], 99.50th=[47449], 99.90th=[54264], 99.95th=[54264], 00:10:05.108 | 99.99th=[66847] 00:10:05.108 bw ( KiB/s): min=23760, max=23760, per=25.21%, avg=23760.00, stdev= 0.00, samples=1 00:10:05.108 iops : min= 5940, max= 5940, avg=5940.00, stdev= 0.00, samples=1 00:10:05.108 lat (usec) : 750=0.01% 00:10:05.108 lat (msec) : 2=0.37%, 4=4.12%, 10=49.05%, 20=24.93%, 50=17.19% 00:10:05.108 lat (msec) : 100=4.32% 00:10:05.108 cpu : usr=3.40%, sys=4.50%, ctx=309, majf=0, minf=1 00:10:05.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:05.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.108 issued rwts: total=4096,4472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.108 job1: (groupid=0, jobs=1): err= 0: pid=2594938: Wed Nov 20 11:10:57 2024 00:10:05.108 read: IOPS=5718, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1007msec) 00:10:05.108 slat (nsec): min=1029, max=11459k, avg=85089.99, stdev=612093.94 00:10:05.108 clat (usec): min=1541, max=40573, avg=10818.77, stdev=4401.31 00:10:05.108 lat (usec): min=3972, max=40583, avg=10903.86, stdev=4453.73 00:10:05.108 clat percentiles (usec): 00:10:05.108 | 1.00th=[ 5604], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 7767], 00:10:05.108 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10290], 00:10:05.108 | 70.00th=[12387], 80.00th=[13698], 90.00th=[14615], 95.00th=[17433], 00:10:05.108 | 99.00th=[33424], 99.50th=[37487], 99.90th=[39584], 99.95th=[40633], 
00:10:05.108 | 99.99th=[40633] 00:10:05.108 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:10:05.108 slat (nsec): min=1668, max=12387k, avg=75580.23, stdev=537455.71 00:10:05.108 clat (usec): min=1321, max=40436, avg=10636.52, stdev=5112.99 00:10:05.108 lat (usec): min=1333, max=40439, avg=10712.10, stdev=5154.76 00:10:05.108 clat percentiles (usec): 00:10:05.108 | 1.00th=[ 3818], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6456], 00:10:05.108 | 30.00th=[ 7046], 40.00th=[ 8291], 50.00th=[ 9241], 60.00th=[10552], 00:10:05.108 | 70.00th=[13042], 80.00th=[14091], 90.00th=[17957], 95.00th=[21890], 00:10:05.108 | 99.00th=[26870], 99.50th=[28181], 99.90th=[29492], 99.95th=[29492], 00:10:05.108 | 99.99th=[40633] 00:10:05.108 bw ( KiB/s): min=19504, max=29640, per=26.07%, avg=24572.00, stdev=7167.23, samples=2 00:10:05.108 iops : min= 4876, max= 7410, avg=6143.00, stdev=1791.81, samples=2 00:10:05.108 lat (msec) : 2=0.08%, 4=0.63%, 10=55.87%, 20=38.03%, 50=5.39% 00:10:05.108 cpu : usr=4.57%, sys=7.75%, ctx=375, majf=0, minf=2 00:10:05.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:05.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.108 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.108 job2: (groupid=0, jobs=1): err= 0: pid=2594949: Wed Nov 20 11:10:57 2024 00:10:05.108 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:05.108 slat (nsec): min=983, max=9572.2k, avg=82068.45, stdev=592348.33 00:10:05.108 clat (usec): min=4504, max=28250, avg=10938.48, stdev=4092.34 00:10:05.108 lat (usec): min=4754, max=28276, avg=11020.55, stdev=4139.44 00:10:05.108 clat percentiles (usec): 00:10:05.108 | 1.00th=[ 6128], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 8225], 00:10:05.108 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:10:05.108 | 70.00th=[12387], 80.00th=[14222], 90.00th=[18220], 95.00th=[19268], 00:10:05.108 | 99.00th=[23725], 99.50th=[25035], 99.90th=[25297], 99.95th=[27132], 00:10:05.108 | 99.99th=[28181] 00:10:05.108 write: IOPS=6010, BW=23.5MiB/s (24.6MB/s)(23.6MiB/1003msec); 0 zone resets 00:10:05.108 slat (nsec): min=1666, max=12203k, avg=79904.13, stdev=566707.81 00:10:05.108 clat (usec): min=509, max=37602, avg=10892.67, stdev=5869.17 00:10:05.108 lat (usec): min=1154, max=37636, avg=10972.58, stdev=5907.42 00:10:05.108 clat percentiles (usec): 00:10:05.108 | 1.00th=[ 3687], 5.00th=[ 4228], 10.00th=[ 5473], 20.00th=[ 7177], 00:10:05.108 | 30.00th=[ 7963], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9241], 00:10:05.108 | 70.00th=[11731], 80.00th=[14353], 90.00th=[20055], 95.00th=[25822], 00:10:05.108 | 99.00th=[28967], 99.50th=[28967], 99.90th=[32637], 99.95th=[33817], 00:10:05.108 | 99.99th=[37487] 00:10:05.108 bw ( KiB/s): min=20608, max=26600, per=25.04%, avg=23604.00, stdev=4236.98, samples=2 00:10:05.108 iops : min= 5152, max= 6650, avg=5901.00, stdev=1059.25, samples=2 00:10:05.108 lat (usec) : 750=0.01% 00:10:05.108 lat (msec) : 2=0.16%, 4=1.48%, 10=62.14%, 20=29.77%, 50=6.45% 00:10:05.108 cpu : usr=3.39%, sys=7.58%, ctx=488, majf=0, minf=2 00:10:05.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:05.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.109 issued rwts: total=5632,6029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.109 job3: (groupid=0, jobs=1): err= 0: pid=2594957: Wed Nov 20 11:10:57 2024 00:10:05.109 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:10:05.109 slat (nsec): min=962, max=23057k, avg=72621.12, stdev=630447.67 00:10:05.109 clat (usec): min=2252, max=84083, avg=10087.76, stdev=8759.32 00:10:05.109 lat (usec): min=2257, max=84093, avg=10160.38, stdev=8817.43 00:10:05.109 clat percentiles (usec): 00:10:05.109 | 1.00th=[ 2606], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 6849], 00:10:05.109 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 8979], 00:10:05.109 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[12387], 95.00th=[14877], 00:10:05.109 | 99.00th=[61080], 99.50th=[70779], 99.90th=[84411], 99.95th=[84411], 00:10:05.109 | 99.99th=[84411] 00:10:05.109 write: IOPS=7043, BW=27.5MiB/s (28.9MB/s)(27.7MiB/1006msec); 0 zone resets 00:10:05.109 slat (nsec): min=1679, max=12823k, avg=63246.36, stdev=471162.88 00:10:05.109 clat (usec): min=681, max=39992, avg=8532.97, stdev=4507.61 00:10:05.109 lat (usec): min=718, max=51717, avg=8596.22, stdev=4558.47 00:10:05.109 clat percentiles (usec): 00:10:05.109 | 1.00th=[ 1893], 5.00th=[ 3294], 10.00th=[ 4883], 20.00th=[ 6063], 00:10:05.109 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 8160], 60.00th=[ 8586], 00:10:05.109 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[11731], 95.00th=[17957], 00:10:05.109 | 99.00th=[26608], 99.50th=[34341], 99.90th=[40109], 99.95th=[40109], 00:10:05.109 | 99.99th=[40109] 00:10:05.109 bw ( KiB/s): min=26080, max=29584, per=29.53%, avg=27832.00, stdev=2477.70, samples=2 00:10:05.109 iops : min= 6520, max= 7396, avg=6958.00, stdev=619.43, samples=2 00:10:05.109 lat (usec) : 750=0.03% 00:10:05.109 lat (msec) : 2=0.57%, 4=3.75%, 10=76.87%, 20=15.41%, 50=2.39% 00:10:05.109 lat (msec) : 100=0.98% 00:10:05.109 cpu : usr=5.17%, sys=8.26%, ctx=667, majf=0, minf=2 00:10:05.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:05.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.109 issued rwts: total=6656,7086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.109 00:10:05.109 Run status group 0 (all jobs): 00:10:05.109 READ: bw=85.9MiB/s (90.1MB/s), 16.0MiB/s-25.8MiB/s (16.8MB/s-27.1MB/s), io=86.5MiB (90.7MB), run=1001-1007msec 00:10:05.109 WRITE: bw=92.1MiB/s (96.5MB/s), 17.5MiB/s-27.5MiB/s (18.3MB/s-28.9MB/s), io=92.7MiB (97.2MB), run=1001-1007msec 00:10:05.109 00:10:05.109 Disk stats (read/write): 00:10:05.109 nvme0n1: ios=3640/3991, merge=0/0, ticks=27125/23548, in_queue=50673, util=86.47% 00:10:05.109 nvme0n2: ios=4657/4877, merge=0/0, ticks=49381/52138, in_queue=101519, util=88.38% 00:10:05.109 nvme0n3: ios=4670/4795, merge=0/0, ticks=36066/34579, in_queue=70645, util=92.93% 00:10:05.109 nvme0n4: ios=5693/5671, merge=0/0, ticks=47547/41347, in_queue=88894, util=94.56% 00:10:05.109 11:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:05.109 [global] 00:10:05.109 thread=1 00:10:05.109 invalidate=1 00:10:05.109 rw=randwrite 00:10:05.109 time_based=1 00:10:05.109 runtime=1 
00:10:05.109 ioengine=libaio 00:10:05.109 direct=1 00:10:05.109 bs=4096 00:10:05.109 iodepth=128 00:10:05.109 norandommap=0 00:10:05.109 numjobs=1 00:10:05.109 00:10:05.109 verify_dump=1 00:10:05.109 verify_backlog=512 00:10:05.109 verify_state_save=0 00:10:05.109 do_verify=1 00:10:05.109 verify=crc32c-intel 00:10:05.109 [job0] 00:10:05.109 filename=/dev/nvme0n1 00:10:05.109 [job1] 00:10:05.109 filename=/dev/nvme0n2 00:10:05.109 [job2] 00:10:05.109 filename=/dev/nvme0n3 00:10:05.109 [job3] 00:10:05.109 filename=/dev/nvme0n4 00:10:05.109 Could not set queue depth (nvme0n1) 00:10:05.109 Could not set queue depth (nvme0n2) 00:10:05.109 Could not set queue depth (nvme0n3) 00:10:05.109 Could not set queue depth (nvme0n4) 00:10:05.370 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.370 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.370 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.370 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.370 fio-3.35 00:10:05.370 Starting 4 threads 00:10:06.754 00:10:06.754 job0: (groupid=0, jobs=1): err= 0: pid=2595458: Wed Nov 20 11:10:59 2024 00:10:06.754 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.1MiB/1009msec) 00:10:06.754 slat (nsec): min=984, max=20604k, avg=121893.80, stdev=851468.37 00:10:06.754 clat (usec): min=4944, max=87728, avg=14652.29, stdev=9667.02 00:10:06.754 lat (usec): min=4953, max=87736, avg=14774.19, stdev=9760.28 00:10:06.754 clat percentiles (usec): 00:10:06.754 | 1.00th=[ 5407], 5.00th=[ 7046], 10.00th=[ 8586], 20.00th=[10290], 00:10:06.754 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[13304], 00:10:06.754 | 70.00th=[15008], 80.00th=[16909], 90.00th=[19530], 95.00th=[24511], 00:10:06.754 | 99.00th=[73925], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:10:06.754 | 99.99th=[87557] 00:10:06.754 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:10:06.754 slat (nsec): min=1736, max=13933k, avg=130991.19, stdev=733903.92 00:10:06.754 clat (usec): min=2595, max=87707, avg=18312.90, stdev=12479.25 00:10:06.754 lat (usec): min=2602, max=87712, avg=18443.89, stdev=12542.77 00:10:06.754 clat percentiles (usec): 00:10:06.754 | 1.00th=[ 4113], 5.00th=[ 4621], 10.00th=[ 6718], 20.00th=[ 8356], 00:10:06.754 | 30.00th=[10028], 40.00th=[13435], 50.00th=[16909], 60.00th=[19268], 00:10:06.754 | 70.00th=[20317], 80.00th=[24249], 90.00th=[32900], 95.00th=[42730], 00:10:06.754 | 99.00th=[64226], 99.50th=[65274], 99.90th=[67634], 99.95th=[87557], 00:10:06.754 | 99.99th=[87557] 00:10:06.754 bw ( KiB/s): min=14640, max=17288, per=23.13%, avg=15964.00, stdev=1872.42, samples=2 00:10:06.754 iops : min= 3660, max= 4322, avg=3991.00, stdev=468.10, samples=2 00:10:06.754 lat (msec) : 4=0.19%, 10=23.54%, 20=55.60%, 50=17.59%, 100=3.08% 00:10:06.754 cpu : usr=2.88%, sys=4.46%, ctx=338, majf=0, minf=1 00:10:06.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:06.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.754 issued rwts: total=3606,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.755 job1: (groupid=0, jobs=1): 
err= 0: pid=2595467: Wed Nov 20 11:10:59 2024 00:10:06.755 read: IOPS=1767, BW=7070KiB/s (7239kB/s)(7112KiB/1006msec) 00:10:06.755 slat (nsec): min=903, max=22396k, avg=249710.38, stdev=1540508.44 00:10:06.755 clat (usec): min=2753, max=91853, avg=28480.49, stdev=19525.39 00:10:06.755 lat (usec): min=6651, max=91879, avg=28730.20, stdev=19698.16 00:10:06.755 clat percentiles (usec): 00:10:06.755 | 1.00th=[ 6783], 5.00th=[13173], 10.00th=[14484], 20.00th=[14746], 00:10:06.755 | 30.00th=[15139], 40.00th=[16909], 50.00th=[20317], 60.00th=[22676], 00:10:06.755 | 70.00th=[30016], 80.00th=[41157], 90.00th=[61604], 95.00th=[74974], 00:10:06.755 | 99.00th=[81265], 99.50th=[85459], 99.90th=[89654], 99.95th=[91751], 00:10:06.755 | 99.99th=[91751] 00:10:06.755 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:10:06.755 slat (nsec): min=1596, max=25232k, avg=266759.89, stdev=1407533.55 00:10:06.755 clat (usec): min=1222, max=102193, avg=37598.39, stdev=26702.47 00:10:06.755 lat (usec): min=1231, max=102201, avg=37865.15, stdev=26874.60 00:10:06.755 clat percentiles (usec): 00:10:06.755 | 1.00th=[ 1729], 5.00th=[ 5932], 10.00th=[ 9896], 20.00th=[ 14615], 00:10:06.755 | 30.00th=[ 17957], 40.00th=[ 20579], 50.00th=[ 29492], 60.00th=[ 39584], 00:10:06.755 | 70.00th=[ 52691], 80.00th=[ 65274], 90.00th=[ 77071], 95.00th=[ 87557], 00:10:06.755 | 99.00th=[100140], 99.50th=[101188], 99.90th=[102237], 99.95th=[102237], 00:10:06.755 | 99.99th=[102237] 00:10:06.755 bw ( KiB/s): min= 8192, max= 8192, per=11.87%, avg=8192.00, stdev= 0.00, samples=2 00:10:06.755 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:06.755 lat (msec) : 2=0.60%, 4=0.94%, 10=5.65%, 20=36.33%, 50=31.52% 00:10:06.755 lat (msec) : 100=24.41%, 250=0.55% 00:10:06.755 cpu : usr=1.49%, sys=2.19%, ctx=215, majf=0, minf=2 00:10:06.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:06.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.755 issued rwts: total=1778,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.755 job2: (groupid=0, jobs=1): err= 0: pid=2595476: Wed Nov 20 11:10:59 2024 00:10:06.755 read: IOPS=3178, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1006msec) 00:10:06.755 slat (nsec): min=989, max=17272k, avg=131841.74, stdev=823979.65 00:10:06.755 clat (usec): min=4639, max=65847, avg=15906.11, stdev=9116.13 00:10:06.755 lat (usec): min=4648, max=65855, avg=16037.95, stdev=9181.13 00:10:06.755 clat percentiles (usec): 00:10:06.755 | 1.00th=[ 5997], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 9110], 00:10:06.755 | 30.00th=[ 9896], 40.00th=[12256], 50.00th=[13173], 60.00th=[15008], 00:10:06.755 | 70.00th=[17957], 80.00th=[21365], 90.00th=[27919], 95.00th=[32900], 00:10:06.755 | 99.00th=[53740], 99.50th=[61080], 99.90th=[65799], 99.95th=[65799], 00:10:06.755 | 99.99th=[65799] 00:10:06.755 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:10:06.755 slat (nsec): min=1629, max=13372k, avg=155427.50, stdev=824926.74 00:10:06.755 clat (usec): min=1146, max=88493, avg=21411.58, stdev=16822.99 00:10:06.755 lat (usec): min=1157, max=88501, avg=21567.01, stdev=16929.52 00:10:06.755 clat percentiles (usec): 00:10:06.755 | 1.00th=[ 4359], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 7504], 00:10:06.755 | 30.00th=[10552], 40.00th=[16712], 50.00th=[18744], 60.00th=[19792], 
00:10:06.755 | 70.00th=[21890], 80.00th=[26870], 90.00th=[47449], 95.00th=[62653], 00:10:06.755 | 99.00th=[78119], 99.50th=[83362], 99.90th=[88605], 99.95th=[88605], 00:10:06.755 | 99.99th=[88605] 00:10:06.755 bw ( KiB/s): min=14280, max=14384, per=20.77%, avg=14332.00, stdev=73.54, samples=2 00:10:06.755 iops : min= 3570, max= 3596, avg=3583.00, stdev=18.38, samples=2 00:10:06.755 lat (msec) : 2=0.03%, 4=0.38%, 10=29.58%, 20=39.46%, 50=24.95% 00:10:06.755 lat (msec) : 100=5.60% 00:10:06.755 cpu : usr=3.08%, sys=3.88%, ctx=337, majf=0, minf=2 00:10:06.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:06.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.755 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.755 job3: (groupid=0, jobs=1): err= 0: pid=2595484: Wed Nov 20 11:10:59 2024 00:10:06.755 read: IOPS=7293, BW=28.5MiB/s (29.9MB/s)(28.7MiB/1006msec) 00:10:06.755 slat (nsec): min=936, max=14965k, avg=54905.41, stdev=494272.44 00:10:06.755 clat (usec): min=1882, max=27783, avg=8785.18, stdev=3993.73 00:10:06.755 lat (usec): min=1892, max=27792, avg=8840.08, stdev=4012.58 00:10:06.755 clat percentiles (usec): 00:10:06.755 | 1.00th=[ 3392], 5.00th=[ 5145], 10.00th=[ 5997], 20.00th=[ 6259], 00:10:06.755 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7898], 00:10:06.755 | 70.00th=[ 8717], 80.00th=[10683], 90.00th=[14091], 95.00th=[18482], 00:10:06.755 | 99.00th=[23987], 99.50th=[23987], 99.90th=[23987], 99.95th=[26870], 00:10:06.755 | 99.99th=[27657] 00:10:06.755 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:10:06.755 slat (nsec): min=1535, max=20842k, avg=56379.82, stdev=565888.44 00:10:06.755 clat (usec): min=993, max=40177, avg=8157.46, stdev=4708.50 00:10:06.755 lat (usec): min=1001, max=40179, avg=8213.84, stdev=4759.32 00:10:06.755 clat percentiles (usec): 00:10:06.755 | 1.00th=[ 1975], 5.00th=[ 3458], 10.00th=[ 4080], 20.00th=[ 4948], 00:10:06.755 | 30.00th=[ 6063], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 7177], 00:10:06.755 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[12911], 95.00th=[18744], 00:10:06.755 | 99.00th=[25297], 99.50th=[31327], 99.90th=[38011], 99.95th=[40109], 00:10:06.755 | 99.99th=[40109] 00:10:06.755 bw ( KiB/s): min=30368, max=30840, per=44.35%, avg=30604.00, stdev=333.75, samples=2 00:10:06.755 iops : min= 7592, max= 7710, avg=7651.00, stdev=83.44, samples=2 00:10:06.755 lat (usec) : 1000=0.03% 00:10:06.755 lat (msec) : 2=0.73%, 4=4.75%, 10=71.21%, 20=20.19%, 50=3.08% 00:10:06.755 cpu : usr=6.67%, sys=8.76%, ctx=420, majf=0, minf=1 00:10:06.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:06.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.755 issued rwts: total=7337,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.755 00:10:06.755 Run status group 0 (all jobs): 00:10:06.755 READ: bw=61.6MiB/s (64.6MB/s), 7070KiB/s-28.5MiB/s (7239kB/s-29.9MB/s), io=62.2MiB (65.2MB), run=1006-1009msec 00:10:06.755 WRITE: bw=67.4MiB/s (70.7MB/s), 8143KiB/s-29.8MiB/s (8339kB/s-31.3MB/s), io=68.0MiB (71.3MB), run=1006-1009msec 00:10:06.755 00:10:06.755 Disk stats 
(read/write): 00:10:06.755 nvme0n1: ios=3051/3155, merge=0/0, ticks=43720/58257, in_queue=101977, util=97.09% 00:10:06.755 nvme0n2: ios=1559/1671, merge=0/0, ticks=16339/18350, in_queue=34689, util=87.16% 00:10:06.755 nvme0n3: ios=2995/3072, merge=0/0, ticks=43576/58378, in_queue=101954, util=91.24% 00:10:06.755 nvme0n4: ios=6538/6656, merge=0/0, ticks=51066/46617, in_queue=97683, util=100.00% 00:10:06.755 11:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:06.755 11:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2595774 00:10:06.755 11:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:06.755 11:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:06.755 [global] 00:10:06.755 thread=1 00:10:06.755 invalidate=1 00:10:06.755 rw=read 00:10:06.755 time_based=1 00:10:06.755 runtime=10 00:10:06.755 ioengine=libaio 00:10:06.755 direct=1 00:10:06.755 bs=4096 00:10:06.755 iodepth=1 00:10:06.755 norandommap=1 00:10:06.755 numjobs=1 00:10:06.755 00:10:06.755 [job0] 00:10:06.755 filename=/dev/nvme0n1 00:10:06.755 [job1] 00:10:06.755 filename=/dev/nvme0n2 00:10:06.755 [job2] 00:10:06.755 filename=/dev/nvme0n3 00:10:06.755 [job3] 00:10:06.755 filename=/dev/nvme0n4 00:10:06.755 Could not set queue depth (nvme0n1) 00:10:06.755 Could not set queue depth (nvme0n2) 00:10:06.755 Could not set queue depth (nvme0n3) 00:10:06.755 Could not set queue depth (nvme0n4) 00:10:07.052 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.052 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.052 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.052 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.052 fio-3.35 00:10:07.052 Starting 4 threads 00:10:09.662 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:09.923 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12275712, buflen=4096 00:10:09.923 fio: pid=2596012, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:09.923 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:09.923 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.923 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:09.923 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1044480, buflen=4096 00:10:09.923 fio: pid=2596003, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.185 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13557760, buflen=4096 00:10:10.185 fio: pid=2595986, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.185 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.185 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:10.446 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1335296, buflen=4096 00:10:10.446 fio: pid=2595995, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:10.446 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.446 11:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:10.446 00:10:10.446 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2595986: Wed Nov 20 11:11:03 2024 00:10:10.446 read: IOPS=1124, BW=4496KiB/s (4604kB/s)(12.9MiB/2945msec) 00:10:10.446 slat (usec): min=5, max=32174, avg=40.17, stdev=592.96 00:10:10.446 clat (usec): min=168, max=11522, avg=837.07, stdev=282.73 00:10:10.446 lat (usec): min=174, max=33111, avg=877.24, stdev=658.20 00:10:10.446 clat percentiles (usec): 00:10:10.446 | 1.00th=[ 502], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 725], 00:10:10.446 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 816], 60.00th=[ 840], 00:10:10.446 | 70.00th=[ 898], 80.00th=[ 947], 90.00th=[ 996], 95.00th=[ 1037], 00:10:10.446 | 99.00th=[ 1139], 99.50th=[ 1467], 99.90th=[ 2606], 99.95th=[ 8586], 00:10:10.446 | 99.99th=[11469] 00:10:10.446 bw ( KiB/s): min= 4184, max= 4976, per=52.04%, avg=4601.60, stdev=362.67, samples=5 00:10:10.446 iops : min= 1046, max= 1244, avg=1150.40, stdev=90.67, samples=5 00:10:10.446 lat (usec) : 250=0.15%, 500=0.82%, 750=24.68%, 1000=65.15% 00:10:10.446 lat (msec) : 2=8.94%, 4=0.15%, 10=0.06%, 20=0.03% 00:10:10.446 cpu : usr=1.15%, sys=3.87%, ctx=3315, majf=0, minf=2 00:10:10.446 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 issued rwts: total=3311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.447 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2595995: Wed Nov 20 11:11:03 2024 00:10:10.447 read: IOPS=104, BW=418KiB/s (429kB/s)(1304KiB/3116msec) 00:10:10.447 slat (usec): min=6, max=19725, avg=134.82, stdev=1255.83 00:10:10.447 clat (usec): min=382, max=42060, avg=9414.32, stdev=16758.34 00:10:10.447 lat (usec): min=408, max=61058, avg=9526.92, stdev=16968.08 00:10:10.447 clat percentiles (usec): 00:10:10.447 | 1.00th=[ 537], 5.00th=[ 611], 10.00th=[ 652], 20.00th=[ 701], 00:10:10.447 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 791], 00:10:10.447 | 70.00th=[ 816], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:10.447 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:10.447 | 99.99th=[42206] 00:10:10.447 bw ( KiB/s): min= 89, max= 2096, per=4.86%, avg=430.83, stdev=815.80, samples=6 00:10:10.447 iops : min= 22, max= 524, avg=107.67, stdev=203.97, samples=6 00:10:10.447 lat (usec) : 500=0.61%, 750=33.33%, 1000=44.34% 00:10:10.447 lat (msec) : 2=0.31%, 50=21.10% 00:10:10.447 cpu : usr=0.06%, sys=0.55%, 
ctx=330, majf=0, minf=2 00:10:10.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.447 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2596003: Wed Nov 20 11:11:03 2024 00:10:10.447 read: IOPS=92, BW=367KiB/s (376kB/s)(1020KiB/2778msec) 00:10:10.447 slat (nsec): min=7105, max=39065, avg=22772.26, stdev=7796.28 00:10:10.447 clat (usec): min=466, max=43065, avg=10779.59, stdev=17740.82 00:10:10.447 lat (usec): min=493, max=43086, avg=10802.33, stdev=17743.15 00:10:10.447 clat percentiles (usec): 00:10:10.447 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 701], 00:10:10.447 | 30.00th=[ 742], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 799], 00:10:10.447 | 70.00th=[ 848], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:10.447 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:10.447 | 99.99th=[43254] 00:10:10.447 bw ( KiB/s): min= 96, max= 944, per=4.50%, avg=398.40, stdev=416.02, samples=5 00:10:10.447 iops : min= 24, max= 236, avg=99.60, stdev=104.00, samples=5 00:10:10.447 lat (usec) : 500=0.78%, 750=34.77%, 1000=39.45% 00:10:10.447 lat (msec) : 2=0.39%, 50=24.22% 00:10:10.447 cpu : usr=0.07%, sys=0.29%, ctx=256, majf=0, minf=1 00:10:10.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 issued rwts: total=256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.447 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.447 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2596012: Wed Nov 20 11:11:03 2024 00:10:10.447 read: IOPS=1169, BW=4677KiB/s (4790kB/s)(11.7MiB/2563msec) 00:10:10.447 slat (nsec): min=6931, max=62021, avg=25274.02, stdev=6598.09 00:10:10.447 clat (usec): min=254, max=2089, avg=820.84, stdev=145.16 00:10:10.447 lat (usec): min=280, max=2116, avg=846.11, stdev=146.98 00:10:10.447 clat percentiles (usec): 00:10:10.447 | 1.00th=[ 486], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 709], 00:10:10.447 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 824], 00:10:10.447 | 70.00th=[ 930], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1037], 00:10:10.447 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1450], 99.95th=[ 1680], 00:10:10.447 | 99.99th=[ 2089] 00:10:10.447 bw ( KiB/s): min= 4072, max= 5288, per=52.97%, avg=4684.80, stdev=575.24, samples=5 00:10:10.447 iops : min= 1018, max= 1322, avg=1171.20, stdev=143.81, samples=5 00:10:10.447 lat (usec) : 500=1.13%, 750=31.65%, 1000=53.80% 00:10:10.447 lat (msec) : 2=13.34%, 4=0.03% 00:10:10.447 cpu : usr=1.60%, sys=3.83%, ctx=2998, majf=0, minf=2 00:10:10.447 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.447 issued rwts: total=2998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.447 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:10:10.447 00:10:10.447 Run status group 0 (all jobs): 00:10:10.447 READ: bw=8842KiB/s (9054kB/s), 367KiB/s-4677KiB/s (376kB/s-4790kB/s), io=26.9MiB (28.2MB), run=2563-3116msec 00:10:10.447 00:10:10.447 Disk stats (read/write): 00:10:10.447 nvme0n1: ios=3196/0, merge=0/0, ticks=2472/0, in_queue=2472, util=92.99% 00:10:10.447 nvme0n2: ios=325/0, merge=0/0, ticks=3027/0, in_queue=3027, util=94.76% 00:10:10.447 nvme0n3: ios=250/0, merge=0/0, ticks=2538/0, in_queue=2538, util=95.92% 00:10:10.447 nvme0n4: ios=2744/0, merge=0/0, ticks=2196/0, in_queue=2196, util=96.05% 00:10:10.447 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.447 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:10.723 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.723 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:10.984 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.984 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:11.243 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.243 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:11.243 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:11.243 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2595774 00:10:11.243 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:11.243 11:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:11.504 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:11.505 nvmf hotplug test: fio failed as expected 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.505 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.505 rmmod nvme_tcp 00:10:11.765 rmmod nvme_fabrics 00:10:11.765 rmmod nvme_keyring 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2592200 ']' 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2592200 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2592200 ']' 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2592200 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592200 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592200' 00:10:11.765 killing process with pid 2592200 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2592200 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2592200 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.765 11:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.309 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.309 00:10:14.309 real 0m29.322s 00:10:14.309 user 2m39.036s 00:10:14.309 sys 0m9.514s 00:10:14.309 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.310 ************************************ 00:10:14.310 END TEST nvmf_fio_target 00:10:14.310 ************************************ 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.310 ************************************ 00:10:14.310 START TEST nvmf_bdevio 00:10:14.310 ************************************ 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:14.310 * Looking for test storage... 
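The teardown traced above (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini) reduces to a handful of host commands. A minimal manual sketch follows, assuming the target app was launched from the same shell with its pid in TGT_PID, and reusing the interface/namespace names from this run (they may differ on other rigs); this approximates the framework's sequence, it is not the framework itself:

#!/usr/bin/env bash
# Sketch of the nvmftestfini sequence traced above (approximation, not the test code).
sync                                                   # flush dirty pages before module removal
modprobe -r nvme-tcp                                   # source of the rmmod nvme_tcp line
modprobe -r nvme-fabrics                               # also drops nvme_fabrics/nvme_keyring when unused
kill "$TGT_PID"; wait "$TGT_PID"                       # killprocess: wait works because the
                                                       # target was started by this shell
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rules
ip netns del cvl_0_0_ns_spdk 2>/dev/null               # _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side address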
00:10:14.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.310 --rc genhtml_branch_coverage=1 00:10:14.310 --rc genhtml_function_coverage=1 00:10:14.310 --rc genhtml_legend=1 00:10:14.310 --rc geninfo_all_blocks=1 00:10:14.310 --rc geninfo_unexecuted_blocks=1 00:10:14.310 00:10:14.310 ' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.310 --rc genhtml_branch_coverage=1 00:10:14.310 --rc genhtml_function_coverage=1 00:10:14.310 --rc genhtml_legend=1 00:10:14.310 --rc geninfo_all_blocks=1 00:10:14.310 --rc geninfo_unexecuted_blocks=1 00:10:14.310 00:10:14.310 ' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.310 --rc genhtml_branch_coverage=1 00:10:14.310 --rc genhtml_function_coverage=1 00:10:14.310 --rc genhtml_legend=1 00:10:14.310 --rc geninfo_all_blocks=1 00:10:14.310 --rc geninfo_unexecuted_blocks=1 00:10:14.310 00:10:14.310 ' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.310 --rc genhtml_branch_coverage=1 00:10:14.310 --rc genhtml_function_coverage=1 00:10:14.310 --rc genhtml_legend=1 00:10:14.310 --rc geninfo_all_blocks=1 00:10:14.310 --rc geninfo_unexecuted_blocks=1 00:10:14.310 00:10:14.310 ' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:14.310 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.311 11:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.448 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:22.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:22.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.449 11:11:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:22.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:22.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.449 
11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:22.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:22.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms
00:10:22.449
00:10:22.449 --- 10.0.0.2 ping statistics ---
00:10:22.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:22.449 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:22.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:22.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms
00:10:22.449
00:10:22.449 --- 10.0.0.1 ping statistics ---
00:10:22.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:22.449 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:22.449 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2601326
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2601326
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2601326 ']'
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:22.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:22.450 11:11:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.450 [2024-11-20 11:11:14.506332] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:10:22.450 [2024-11-20 11:11:14.506397] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:22.450 [2024-11-20 11:11:14.607348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:22.450 [2024-11-20 11:11:14.659437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:22.450 [2024-11-20 11:11:14.659486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:22.450 [2024-11-20 11:11:14.659495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:22.450 [2024-11-20 11:11:14.659502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:22.450 [2024-11-20 11:11:14.659508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:22.450 [2024-11-20 11:11:14.661536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:22.450 [2024-11-20 11:11:14.661691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:10:22.450 [2024-11-20 11:11:14.661849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:10:22.450 [2024-11-20 11:11:14.661850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.710 [2024-11-20 11:11:15.385171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.710 Malloc0
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.710 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.971 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.971 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:22.971 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:22.971 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:22.971 [2024-11-20 11:11:15.461060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:22.971 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:22.971 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:22.972 {
00:10:22.972 "params": {
00:10:22.972 "name": "Nvme$subsystem",
00:10:22.972 "trtype": "$TEST_TRANSPORT",
00:10:22.972 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:22.972 "adrfam": "ipv4",
00:10:22.972 "trsvcid": "$NVMF_PORT",
00:10:22.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:22.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:22.972 "hdgst": ${hdgst:-false},
00:10:22.972 "ddgst": ${ddgst:-false}
00:10:22.972 },
00:10:22.972 "method": "bdev_nvme_attach_controller"
00:10:22.972 }
00:10:22.972 EOF
00:10:22.972 )")
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:10:22.972 11:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:10:22.972 "params": {
00:10:22.972 "name": "Nvme1",
00:10:22.972 "trtype": "tcp",
00:10:22.972 "traddr": "10.0.0.2",
00:10:22.972 "adrfam": "ipv4",
00:10:22.972 "trsvcid": "4420",
00:10:22.972 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:22.972 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:22.972 "hdgst": false,
00:10:22.972 "ddgst": false
00:10:22.972 },
00:10:22.972 "method": "bdev_nvme_attach_controller"
00:10:22.972 }'
00:10:22.972 [2024-11-20 11:11:15.519400] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:10:22.972 [2024-11-20 11:11:15.519473] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601416 ]
00:10:22.972 [2024-11-20 11:11:15.615008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:22.972 [2024-11-20 11:11:15.672710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:22.972 [2024-11-20 11:11:15.672880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:22.972 [2024-11-20 11:11:15.672880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:23.544 I/O targets:
00:10:23.544 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:10:23.544
00:10:23.544
00:10:23.544 CUnit - A unit testing framework for C - Version 2.1-3
00:10:23.544 http://cunit.sourceforge.net/
00:10:23.544
00:10:23.544
00:10:23.544 Suite: bdevio tests on: Nvme1n1
00:10:23.544 Test: blockdev write read block ...passed
00:10:23.544 Test: blockdev write zeroes read block ...passed
00:10:23.544 Test: blockdev write zeroes read no split ...passed
00:10:23.544 Test: blockdev write zeroes read split ...passed
00:10:23.544 Test: blockdev write zeroes read split partial ...passed
00:10:23.544 Test: blockdev reset ...[2024-11-20 11:11:16.178102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:10:23.544 [2024-11-20 11:11:16.178205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd7970 (9): Bad file descriptor
00:10:23.544 [2024-11-20 11:11:16.231922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:10:23.544 passed
00:10:23.544 Test: blockdev write read 8 blocks ...passed
00:10:23.544 Test: blockdev write read size > 128k ...passed
00:10:23.544 Test: blockdev write read invalid size ...passed
00:10:23.805 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:23.805 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:23.805 Test: blockdev write read max offset ...passed
00:10:23.805 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:23.805 Test: blockdev writev readv 8 blocks ...passed
00:10:23.805 Test: blockdev writev readv 30 x 1block ...passed
00:10:23.805 Test: blockdev writev readv block ...passed
00:10:23.805 Test: blockdev writev readv size > 128k ...passed
00:10:23.805 Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:23.805 Test: blockdev comparev and writev ...[2024-11-20 11:11:16.498553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.498601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.498618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.498628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.499221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.499235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.499250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.499259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.499832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.499843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.499857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.499866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.500413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.500425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:10:23.805 [2024-11-20 11:11:16.500440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:10:23.805 [2024-11-20 11:11:16.500449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:10:23.805 passed
00:10:24.066 Test: blockdev nvme passthru rw ...passed
00:10:24.066 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:11:16.585035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:24.066 [2024-11-20 11:11:16.585059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:10:24.066 [2024-11-20 11:11:16.585429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:24.066 [2024-11-20 11:11:16.585441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:10:24.066 [2024-11-20 11:11:16.585820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:24.066 [2024-11-20 11:11:16.585830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:10:24.066 [2024-11-20 11:11:16.586213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:10:24.066 [2024-11-20 11:11:16.586224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:10:24.066 passed
00:10:24.066 Test: blockdev nvme admin passthru ...passed
00:10:24.066 Test: blockdev copy ...passed
00:10:24.066
00:10:24.066 Run Summary: Type Total Ran Passed Failed Inactive
00:10:24.066 suites 1 1 n/a 0 0
00:10:24.066 tests 23 23 23 0 0
00:10:24.066 asserts 152 152 152 0 n/a
00:10:24.066
00:10:24.066 Elapsed time = 1.280 seconds
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:24.066 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:24.066 rmmod nvme_tcp
00:10:24.066 rmmod nvme_fabrics
00:10:24.066 rmmod nvme_keyring
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
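Editor's note for readers reconstructing this suite outside CI: bdevio.sh builds its fixture with the five rpc_cmd calls traced earlier (bdevio.sh@18-22) and then points the bdevio binary at the resulting listener via a JSON config on /dev/fd/62. The bash sketch below is a minimal hand-run equivalent, not the harness itself; it assumes an SPDK checkout at $SPDK_DIR and a running nvmf_tgt on the default /var/tmp/spdk.sock RPC socket, and the outer "subsystems"/"bdev" envelope is an assumption of the standard SPDK JSON-config shape, since the trace only prints the inner bdev_nvme_attach_controller object.

rpc="$SPDK_DIR/scripts/rpc.py"                 # $SPDK_DIR is assumed, not from the trace
"$rpc" nvmf_create_transport -t tcp -o -u 8192 # TCP transport, as traced at bdevio.sh@18
"$rpc" bdev_malloc_create 64 512 -b Malloc0    # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Drive the CUnit suite against the listener; only the inner object below
# appears verbatim in the trace, the envelope is assumed.
"$SPDK_DIR/test/bdev/bdevio/bdevio" --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

One reading note on the comparev and writev output above: each COMPARE FAILURE (02/85) followed by ABORTED - FAILED FUSED (00/09) is the intended outcome of a fused COMPARE+WRITE pair whose compare miscompares, so the paired write is aborted; these notices are not failures, which is why the run still closes with tests 23 23 23 0 0.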
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2601326 ']'
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2601326
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2601326 ']'
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2601326
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2601326
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2601326'
00:10:24.326 killing process with pid 2601326
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2601326
00:10:24.326 11:11:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2601326
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:24.326 11:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:26.874 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:26.874
00:10:26.874 real 0m12.451s
00:10:26.874 user 0m14.191s
00:10:26.874 sys 0m6.350s
00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:26.875 ************************************
00:10:26.875 END TEST nvmf_bdevio
00:10:26.875 ************************************
00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:10:26.875
00:10:26.875 real 5m4.460s
00:10:26.875 user 11m56.214s
00:10:26.875 sys 1m52.331s
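Editor's note on the teardown just traced: the ACCEPT rule inserted during nvmftestinit carried an -m comment --comment 'SPDK_NVMF:...' tag precisely so that iptr can later remove the harness's rules by filtering a full ruleset dump, with no bookkeeping of rule numbers. A hedged manual equivalent for cleaning up a wedged run, assuming this run's names (nvmfpid 2601326, initiator interface cvl_0_1, namespace cvl_0_0_ns_spdk) and assuming that _remove_spdk_ns, whose body the trace suppresses, reduces to deleting the namespace:

kill 2601326                                          # stop nvmf_tgt; killprocess also waits on the pid
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the comment-tagged SPDK rules
ip -4 addr flush cvl_0_1                              # clear the initiator-side address
ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of _remove_spdk_ns

Filtering the iptables-save output rather than deleting a specific rule keeps the cleanup idempotent: the pipeline succeeds whether or not the tagged rule is still present.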
00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:26.875 ************************************ 00:10:26.875 END TEST nvmf_target_core 00:10:26.875 ************************************ 00:10:26.875 11:11:19 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:26.875 11:11:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.875 11:11:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.875 11:11:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.875 ************************************ 00:10:26.875 START TEST nvmf_target_extra 00:10:26.875 ************************************ 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:26.875 * Looking for test storage... 00:10:26.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.875 --rc genhtml_branch_coverage=1 00:10:26.875 --rc genhtml_function_coverage=1 00:10:26.875 --rc genhtml_legend=1 00:10:26.875 --rc geninfo_all_blocks=1 00:10:26.875 --rc geninfo_unexecuted_blocks=1 00:10:26.875 00:10:26.875 ' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.875 --rc genhtml_branch_coverage=1 00:10:26.875 --rc genhtml_function_coverage=1 00:10:26.875 --rc genhtml_legend=1 00:10:26.875 --rc geninfo_all_blocks=1 00:10:26.875 --rc geninfo_unexecuted_blocks=1 00:10:26.875 00:10:26.875 ' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.875 --rc genhtml_branch_coverage=1 00:10:26.875 --rc genhtml_function_coverage=1 00:10:26.875 --rc genhtml_legend=1 00:10:26.875 --rc geninfo_all_blocks=1 00:10:26.875 --rc geninfo_unexecuted_blocks=1 00:10:26.875 00:10:26.875 ' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.875 --rc genhtml_branch_coverage=1 00:10:26.875 --rc genhtml_function_coverage=1 00:10:26.875 --rc genhtml_legend=1 00:10:26.875 --rc geninfo_all_blocks=1 00:10:26.875 --rc geninfo_unexecuted_blocks=1 00:10:26.875 00:10:26.875 ' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:26.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:26.875 ************************************ 00:10:26.875 START TEST nvmf_example 00:10:26.875 ************************************ 00:10:26.875 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:26.875 * Looking for test storage... 
00:10:26.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.876 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.876 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.876 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:27.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.137 --rc genhtml_branch_coverage=1 00:10:27.137 --rc genhtml_function_coverage=1 00:10:27.137 --rc genhtml_legend=1 00:10:27.137 --rc geninfo_all_blocks=1 00:10:27.137 --rc geninfo_unexecuted_blocks=1 00:10:27.137 00:10:27.137 ' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:27.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.137 --rc genhtml_branch_coverage=1 00:10:27.137 --rc genhtml_function_coverage=1 00:10:27.137 --rc genhtml_legend=1 00:10:27.137 --rc geninfo_all_blocks=1 00:10:27.137 --rc geninfo_unexecuted_blocks=1 00:10:27.137 00:10:27.137 ' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:27.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.137 --rc genhtml_branch_coverage=1 00:10:27.137 --rc genhtml_function_coverage=1 00:10:27.137 --rc genhtml_legend=1 00:10:27.137 --rc geninfo_all_blocks=1 00:10:27.137 --rc geninfo_unexecuted_blocks=1 00:10:27.137 00:10:27.137 ' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:27.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.137 --rc genhtml_branch_coverage=1 00:10:27.137 --rc genhtml_function_coverage=1 00:10:27.137 --rc genhtml_legend=1 00:10:27.137 --rc geninfo_all_blocks=1 00:10:27.137 --rc geninfo_unexecuted_blocks=1 00:10:27.137 00:10:27.137 ' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:27.137 11:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:27.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:27.137 11:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:27.137 11:11:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:35.282 11:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:35.282 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:35.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:35.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:35.283 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:35.283 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.283 11:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.283 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:35.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:10:35.283 00:10:35.283 --- 10.0.0.2 ping statistics --- 00:10:35.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.283 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:10:35.283 00:10:35.283 --- 10.0.0.1 ping statistics --- 00:10:35.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.283 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2606086 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:35.283 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:35.284 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2606086 00:10:35.284 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2606086 ']' 00:10:35.284 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.284 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.284 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.284 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.284 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.545 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:35.808 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:45.805 Initializing NVMe Controllers 00:10:45.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:45.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:45.805 Initialization complete. Launching workers. 00:10:45.805 ======================================================== 00:10:45.805 Latency(us) 00:10:45.805 Device Information : IOPS MiB/s Average min max 00:10:45.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18737.10 73.19 3415.55 625.07 15636.11 00:10:45.805 ======================================================== 00:10:45.805 Total : 18737.10 73.19 3415.55 625.07 15636.11 00:10:45.805 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.805 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.805 rmmod nvme_tcp 00:10:45.805 rmmod nvme_fabrics 00:10:46.066 rmmod nvme_keyring 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2606086 ']' 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2606086 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2606086 ']' 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2606086 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2606086 00:10:46.066 11:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2606086' 00:10:46.066 killing process with pid 2606086 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2606086 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2606086 00:10:46.066 nvmf threads initialize successfully 00:10:46.066 bdev subsystem init successfully 00:10:46.066 created a nvmf target service 00:10:46.066 create targets's poll groups done 00:10:46.066 all subsystems of target started 00:10:46.066 nvmf target is running 00:10:46.066 all subsystems of target stopped 00:10:46.066 destroy targets's poll groups done 00:10:46.066 destroyed the nvmf target service 00:10:46.066 bdev subsystem finish successfully 00:10:46.066 nvmf threads destroy successfully 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.066 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.609 00:10:48.609 real 0m21.377s 00:10:48.609 user 0m46.449s 00:10:48.609 sys 0m7.030s 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.609 ************************************ 00:10:48.609 END TEST nvmf_example 00:10:48.609 ************************************ 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.609 ************************************ 00:10:48.609 START TEST nvmf_filesystem 00:10:48.609 ************************************ 00:10:48.609 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:48.609 * Looking for test storage... 00:10:48.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.609 --rc genhtml_branch_coverage=1 00:10:48.609 --rc genhtml_function_coverage=1 00:10:48.609 --rc genhtml_legend=1 00:10:48.609 --rc geninfo_all_blocks=1 00:10:48.609 --rc geninfo_unexecuted_blocks=1 00:10:48.609 00:10:48.609 ' 00:10:48.609 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.609 --rc genhtml_branch_coverage=1 00:10:48.609 --rc genhtml_function_coverage=1 00:10:48.609 --rc genhtml_legend=1 00:10:48.609 --rc geninfo_all_blocks=1 00:10:48.609 --rc geninfo_unexecuted_blocks=1 00:10:48.609 00:10:48.609 ' 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.610 --rc genhtml_branch_coverage=1 00:10:48.610 --rc genhtml_function_coverage=1 00:10:48.610 --rc genhtml_legend=1 00:10:48.610 --rc geninfo_all_blocks=1 00:10:48.610 --rc geninfo_unexecuted_blocks=1 00:10:48.610 00:10:48.610 ' 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.610 --rc genhtml_branch_coverage=1 00:10:48.610 --rc genhtml_function_coverage=1 00:10:48.610 --rc genhtml_legend=1 00:10:48.610 --rc geninfo_all_blocks=1 00:10:48.610 --rc geninfo_unexecuted_blocks=1 00:10:48.610 00:10:48.610 ' 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:48.610 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:48.610 
11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:48.610 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:48.611 #define SPDK_CONFIG_H 00:10:48.611 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:48.611 #define SPDK_CONFIG_APPS 1 00:10:48.611 #define SPDK_CONFIG_ARCH native 00:10:48.611 #undef SPDK_CONFIG_ASAN 00:10:48.611 #undef SPDK_CONFIG_AVAHI 00:10:48.611 #undef SPDK_CONFIG_CET 00:10:48.611 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:48.611 #define SPDK_CONFIG_COVERAGE 1 00:10:48.611 #define SPDK_CONFIG_CROSS_PREFIX 00:10:48.611 #undef SPDK_CONFIG_CRYPTO 00:10:48.611 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:48.611 #undef SPDK_CONFIG_CUSTOMOCF 00:10:48.611 #undef SPDK_CONFIG_DAOS 00:10:48.611 #define SPDK_CONFIG_DAOS_DIR 00:10:48.611 #define SPDK_CONFIG_DEBUG 1 00:10:48.611 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:48.611 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:48.611 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:48.611 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:48.611 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:48.611 #undef SPDK_CONFIG_DPDK_UADK 00:10:48.611 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:48.611 #define SPDK_CONFIG_EXAMPLES 1 00:10:48.611 #undef SPDK_CONFIG_FC 00:10:48.611 #define SPDK_CONFIG_FC_PATH 00:10:48.611 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:48.611 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:48.611 #define SPDK_CONFIG_FSDEV 1 00:10:48.611 #undef SPDK_CONFIG_FUSE 00:10:48.611 #undef SPDK_CONFIG_FUZZER 00:10:48.611 #define SPDK_CONFIG_FUZZER_LIB 00:10:48.611 #undef SPDK_CONFIG_GOLANG 00:10:48.611 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:48.611 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:48.611 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:48.611 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:48.611 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:48.611 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:48.611 #undef SPDK_CONFIG_HAVE_LZ4 00:10:48.611 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:48.611 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:48.611 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:48.611 #define SPDK_CONFIG_IDXD 1 00:10:48.611 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:48.611 #undef SPDK_CONFIG_IPSEC_MB 00:10:48.611 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:48.611 #define SPDK_CONFIG_ISAL 1 00:10:48.611 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:48.611 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:48.611 #define SPDK_CONFIG_LIBDIR 00:10:48.611 #undef SPDK_CONFIG_LTO 00:10:48.611 #define SPDK_CONFIG_MAX_LCORES 128 00:10:48.611 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:48.611 #define SPDK_CONFIG_NVME_CUSE 1 00:10:48.611 #undef SPDK_CONFIG_OCF 00:10:48.611 #define SPDK_CONFIG_OCF_PATH 00:10:48.611 #define SPDK_CONFIG_OPENSSL_PATH 00:10:48.611 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:48.611 #define SPDK_CONFIG_PGO_DIR 00:10:48.611 #undef SPDK_CONFIG_PGO_USE 00:10:48.611 #define SPDK_CONFIG_PREFIX /usr/local 00:10:48.611 #undef SPDK_CONFIG_RAID5F 00:10:48.611 #undef SPDK_CONFIG_RBD 00:10:48.611 #define SPDK_CONFIG_RDMA 1 00:10:48.611 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:48.611 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:48.611 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:48.611 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:48.611 #define SPDK_CONFIG_SHARED 1 00:10:48.611 #undef SPDK_CONFIG_SMA 00:10:48.611 #define SPDK_CONFIG_TESTS 1 00:10:48.611 #undef SPDK_CONFIG_TSAN 
00:10:48.611 #define SPDK_CONFIG_UBLK 1 00:10:48.611 #define SPDK_CONFIG_UBSAN 1 00:10:48.611 #undef SPDK_CONFIG_UNIT_TESTS 00:10:48.611 #undef SPDK_CONFIG_URING 00:10:48.611 #define SPDK_CONFIG_URING_PATH 00:10:48.611 #undef SPDK_CONFIG_URING_ZNS 00:10:48.611 #undef SPDK_CONFIG_USDT 00:10:48.611 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:48.611 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:48.611 #define SPDK_CONFIG_VFIO_USER 1 00:10:48.611 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:48.611 #define SPDK_CONFIG_VHOST 1 00:10:48.611 #define SPDK_CONFIG_VIRTIO 1 00:10:48.611 #undef SPDK_CONFIG_VTUNE 00:10:48.611 #define SPDK_CONFIG_VTUNE_DIR 00:10:48.611 #define SPDK_CONFIG_WERROR 1 00:10:48.611 #define SPDK_CONFIG_WPDK_DIR 00:10:48.611 #undef SPDK_CONFIG_XNVME 00:10:48.611 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.611 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:48.612 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:48.612 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:48.612 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
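[editorial sketch] The long run of ': N' followed by 'export NAME' pairs traced above is bash's default-assignment idiom: xtrace prints only the expanded result, so each flag shows up as two trace records. A minimal sketch of the presumed pattern (flag names and values taken from the trace; the exact source form in autotest_common.sh is an assumption):

# Give each test flag a default only when the caller has not set it,
# then export it so child scripts inherit the final value.
: "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY          # traces as ': 0'
: "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS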
00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:48.613 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2608878 ]] 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2608878 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
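[editorial sketch] The set_test_storage call above is expanded in the trace that follows: parse 'df -T' into per-mount arrays, find the mount backing the test directory, and check it against the requested size. A condensed, hedged sketch of that core logic (the real helper also pads the request, handles tmpfs/ramfs growth, and falls back to a mktemp dir):

set_test_storage_sketch() {
    local requested_size=$1 target_dir=$2
    local source fs size use avail _ mount target_space
    local -A fss avails sizes
    # Parse 'df -T' once into associative arrays keyed by mount point.
    # df reports 1K blocks, so scale to bytes as the traced values suggest.
    while read -r source fs size use avail _ mount; do
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    # Resolve the mount that backs the test directory, as in the trace.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    (( target_space >= requested_size )) || return 1
    printf '* Found test storage at %s\n' "$target_dir"
}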
00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.dAbiqZ 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.dAbiqZ/tests/target /tmp/spdk.dAbiqZ 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:48.614 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118308642816 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11047866368 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.614 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.615 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677769216 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=487424 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:48.615 * Looking for test storage... 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118308642816 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13262458880 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.615 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.878 --rc genhtml_branch_coverage=1 00:10:48.878 --rc genhtml_function_coverage=1 00:10:48.878 --rc genhtml_legend=1 00:10:48.878 --rc geninfo_all_blocks=1 00:10:48.878 --rc geninfo_unexecuted_blocks=1 00:10:48.878 00:10:48.878 ' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.878 --rc genhtml_branch_coverage=1 00:10:48.878 --rc genhtml_function_coverage=1 00:10:48.878 --rc genhtml_legend=1 00:10:48.878 --rc geninfo_all_blocks=1 00:10:48.878 --rc geninfo_unexecuted_blocks=1 00:10:48.878 00:10:48.878 ' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.878 --rc genhtml_branch_coverage=1 00:10:48.878 --rc genhtml_function_coverage=1 00:10:48.878 --rc genhtml_legend=1 00:10:48.878 --rc geninfo_all_blocks=1 00:10:48.878 --rc geninfo_unexecuted_blocks=1 00:10:48.878 00:10:48.878 ' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.878 --rc genhtml_branch_coverage=1 00:10:48.878 --rc genhtml_function_coverage=1 00:10:48.878 --rc genhtml_legend=1 00:10:48.878 --rc geninfo_all_blocks=1 00:10:48.878 --rc geninfo_unexecuted_blocks=1 00:10:48.878 00:10:48.878 ' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
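[editorial sketch] The lcov check traced above runs the version comparator from scripts/common.sh: split both version strings on '.', '-' and ':', then compare numeric components left to right, padding the shorter one with zeros. A standalone sketch of that walk (strict and compound operators only; the real helper also validates each component is decimal):

cmp_versions_sketch() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"      # e.g. "1.15" -> (1 15)
    local op=$2
    IFS=.-: read -ra ver2 <<< "$3"      # e.g. "2"    -> (2)
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]  # versions are equal
}
cmp_versions_sketch 1.15 '<' 2 && echo "lcov is older than 2"   # as traced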
-- nvmf/common.sh@7 -- # uname -s 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.878 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.879 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.879 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.016 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.016 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.016 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.016 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.016 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:57.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:57.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.017 11:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:57.017 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:57.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.017 11:11:48 
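The discovery pass above reduces to a sysfs walk: match the Intel E810 device IDs (0x8086:0x1592/0x159b) on the PCI bus, then list each matching function's kernel interfaces under /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of that walk; driving it from lspci is an assumption for illustration (the test scripts keep their own pci_bus_cache instead of shelling out to lspci):

  #!/usr/bin/env bash
  # Resolve the net interfaces backing each Intel E810 function (0x8086:0x159b).
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$netdir" ] || continue
      echo "Found net devices under $pci: ${netdir##*/}"   # cvl_0_0 / cvl_0_1 in this run
    done
  done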
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:10:57.017 00:10:57.017 --- 10.0.0.2 ping statistics --- 00:10:57.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.017 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:10:57.017 00:10:57.017 --- 10.0.0.1 ping statistics --- 00:10:57.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.017 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.017 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.018 ************************************ 00:10:57.018 START TEST nvmf_filesystem_no_in_capsule 00:10:57.018 ************************************ 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.018 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2612757 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2612757 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2612757 ']' 00:10:57.018 
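Condensed, the nvmf_tcp_init sequence traced above builds a point-to-point topology from the two E810 ports: the target port moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port stays in the root namespace as 10.0.0.1, an iptables rule admits TCP/4420, and a ping in each direction proves the path. A sketch with the interface and address values copied from this run (run as root; error handling omitted):

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port now lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"             # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target ns -> root ns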
11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.018 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.018 [2024-11-20 11:11:49.058188] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:10:57.018 [2024-11-20 11:11:49.058253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.018 [2024-11-20 11:11:49.159164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.018 [2024-11-20 11:11:49.212519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.018 [2024-11-20 11:11:49.212570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.018 [2024-11-20 11:11:49.212579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.018 [2024-11-20 11:11:49.212586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.018 [2024-11-20 11:11:49.212593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
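nvmfappstart above launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers. A simplified sketch of that wait, assuming the SPDK tree as working directory; the real helper in autotest_common.sh handles more cases (configurable timeouts, non-default RPC addresses):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }  # pid still alive?
    if [ -S /var/tmp/spdk.sock ] &&
       ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
      break   # RPC server is up and answering
    fi
    sleep 0.1
  done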
00:10:57.018 [2024-11-20 11:11:49.214933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.018 [2024-11-20 11:11:49.215100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.018 [2024-11-20 11:11:49.215261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.018 [2024-11-20 11:11:49.215415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 [2024-11-20 11:11:49.935346] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.279 11:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 Malloc1 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 11:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 [2024-11-20 11:11:50.079373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:57.541 { 00:10:57.541 "name": "Malloc1", 00:10:57.541 "aliases": [ 00:10:57.541 "a76418a8-5dc6-4f7a-bee4-83252d102e32" 00:10:57.541 ], 00:10:57.541 "product_name": "Malloc disk", 00:10:57.541 "block_size": 512, 00:10:57.541 "num_blocks": 1048576, 00:10:57.541 "uuid": "a76418a8-5dc6-4f7a-bee4-83252d102e32", 00:10:57.541 "assigned_rate_limits": { 00:10:57.541 "rw_ios_per_sec": 0, 00:10:57.541 "rw_mbytes_per_sec": 0, 00:10:57.541 "r_mbytes_per_sec": 0, 00:10:57.541 "w_mbytes_per_sec": 0 00:10:57.541 }, 00:10:57.541 "claimed": true, 00:10:57.541 "claim_type": "exclusive_write", 00:10:57.541 "zoned": false, 00:10:57.541 "supported_io_types": { 00:10:57.541 "read": 
true, 00:10:57.541 "write": true, 00:10:57.541 "unmap": true, 00:10:57.541 "flush": true, 00:10:57.541 "reset": true, 00:10:57.541 "nvme_admin": false, 00:10:57.541 "nvme_io": false, 00:10:57.541 "nvme_io_md": false, 00:10:57.541 "write_zeroes": true, 00:10:57.541 "zcopy": true, 00:10:57.541 "get_zone_info": false, 00:10:57.541 "zone_management": false, 00:10:57.541 "zone_append": false, 00:10:57.541 "compare": false, 00:10:57.541 "compare_and_write": false, 00:10:57.541 "abort": true, 00:10:57.541 "seek_hole": false, 00:10:57.541 "seek_data": false, 00:10:57.541 "copy": true, 00:10:57.541 "nvme_iov_md": false 00:10:57.541 }, 00:10:57.541 "memory_domains": [ 00:10:57.541 { 00:10:57.541 "dma_device_id": "system", 00:10:57.541 "dma_device_type": 1 00:10:57.541 }, 00:10:57.541 { 00:10:57.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.541 "dma_device_type": 2 00:10:57.541 } 00:10:57.541 ], 00:10:57.541 "driver_specific": {} 00:10:57.541 } 00:10:57.541 ]' 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:57.541 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:57.542 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:57.542 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:57.542 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:57.542 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:57.542 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:57.542 11:11:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.457 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.457 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.457 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.457 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:59.457 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.371 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.631 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.631 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.570 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:02.570 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.570 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:02.570 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.570 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.830 ************************************ 00:11:02.830 START TEST filesystem_ext4 00:11:02.830 ************************************ 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
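Before the per-filesystem subtests start, waitforserial and the lsblk lookup above resolve the connected namespace by its subsystem serial and sanity-check its size against the malloc bdev. The same steps as a standalone sketch (serial, expected size, and parted arguments copied from this run; poll bounds are a simplification):

  serial=SPDKISFASTANDAWESOME
  for ((i = 0; i < 15; i++)); do   # poll until the namespace appears as a block device
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && break
    sleep 2
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP "([\w]*)(?=\s+$serial)")  # nvme0n1 here
  nvme_size=$(( $(cat "/sys/block/$nvme_name/size") * 512 ))               # sectors -> bytes
  (( nvme_size == 536870912 )) &&
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%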
00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:02.830 11:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.830 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.830 Discarding device blocks: 0/522240 done 00:11:02.830 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.830 Filesystem UUID: b7ab2a4f-3dd5-499a-aaca-65dcf8f0fb48 00:11:02.830 Superblock backups stored on blocks: 00:11:02.830 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.830 00:11:02.830 Allocating group tables: 0/64 done 00:11:02.830 Writing inode tables: 0/64 done 00:11:03.090 Creating journal (8192 blocks): done 00:11:05.305 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:05.305 00:11:05.306 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:05.306 11:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.590 
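Each filesystem_* subtest funnels through the same make_filesystem helper; what the trace shows is: pick -F for ext4 and -f otherwise, run mkfs on the test partition, then exercise the result with a mount/touch/sync/rm/umount cycle. A condensed sketch of that shape; the retry bound is an assumption (the real helper in autotest_common.sh keeps its own counter and limit):

  make_filesystem_sketch() {
    local fstype=$1 dev_name=$2 i=0 force
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    until "mkfs.$fstype" $force "$dev_name"; do
      (( ++i >= 3 )) && return 1   # assumed bound; give up after a few attempts
      sleep 1
    done
  }
  # Exercised as in the ext4 pass above:
  make_filesystem_sketch ext4 /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device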
11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2612757 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.590 00:11:10.590 real 0m7.960s 00:11:10.590 user 0m0.033s 00:11:10.590 sys 0m0.075s 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:10.590 ************************************ 00:11:10.590 END TEST filesystem_ext4 00:11:10.590 ************************************ 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.590 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.851 ************************************ 00:11:10.851 START TEST filesystem_btrfs 00:11:10.851 ************************************ 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:10.851 11:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:10.851 btrfs-progs v6.8.1 00:11:10.851 See https://btrfs.readthedocs.io for more information. 00:11:10.851 00:11:10.851 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:10.851 NOTE: several default settings have changed in version 5.15, please make sure 00:11:10.851 this does not affect your deployments: 00:11:10.851 - DUP for metadata (-m dup) 00:11:10.851 - enabled no-holes (-O no-holes) 00:11:10.851 - enabled free-space-tree (-R free-space-tree) 00:11:10.851 00:11:10.851 Label: (null) 00:11:10.851 UUID: 151a8290-9e50-4d3d-b92c-fd92206bd057 00:11:10.851 Node size: 16384 00:11:10.851 Sector size: 4096 (CPU page size: 4096) 00:11:10.851 Filesystem size: 510.00MiB 00:11:10.851 Block group profiles: 00:11:10.851 Data: single 8.00MiB 00:11:10.851 Metadata: DUP 32.00MiB 00:11:10.851 System: DUP 8.00MiB 00:11:10.851 SSD detected: yes 00:11:10.851 Zoned device: no 00:11:10.851 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:10.851 Checksum: crc32c 00:11:10.851 Number of devices: 1 00:11:10.851 Devices: 00:11:10.851 ID SIZE PATH 00:11:10.851 1 510.00MiB /dev/nvme0n1p1 00:11:10.851 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:10.851 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2612757 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:11.792 
11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:11.792 00:11:11.792 real 0m1.012s 00:11:11.792 user 0m0.025s 00:11:11.792 sys 0m0.121s 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:11.792 ************************************ 00:11:11.792 END TEST filesystem_btrfs 00:11:11.792 ************************************ 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.792 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.792 ************************************ 00:11:11.792 START TEST filesystem_xfs 00:11:11.792 ************************************ 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:11.793 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:11.793 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:11.793 = sectsz=512 attr=2, projid32bit=1 00:11:11.793 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:11.793 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:11.793 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:11.793 = sunit=0 swidth=0 blks 00:11:11.793 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:11.793 log =internal log bsize=4096 blocks=16384, version=2 00:11:11.793 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:11.793 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:13.176 Discarding blocks...Done. 00:11:13.176 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:13.176 11:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2612757 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:15.087 00:11:15.087 real 0m2.978s 00:11:15.087 user 0m0.028s 00:11:15.087 sys 0m0.076s 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:15.087 ************************************ 00:11:15.087 END TEST filesystem_xfs 00:11:15.087 ************************************ 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:15.087 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.660 11:12:08 
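Teardown mirrors setup: remove the test partition under flock, sync, disconnect the controller by NQN, then (waitforserial_disconnect, just below) poll until the serial disappears from lsblk. A sketch with the loop bound assumed:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  for ((i = 0; i < 15; i++)); do
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break  # device gone?
    sleep 1
  done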
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2612757 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2612757 ']' 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2612757 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2612757 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2612757' 00:11:15.660 killing process with pid 2612757 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2612757 00:11:15.660 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2612757 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:15.921 00:11:15.921 real 0m19.574s 00:11:15.921 user 1m17.345s 00:11:15.921 sys 0m1.440s 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.921 ************************************ 00:11:15.921 END TEST nvmf_filesystem_no_in_capsule 00:11:15.921 ************************************ 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.921 ************************************ 00:11:15.921 START TEST nvmf_filesystem_in_capsule 00:11:15.921 ************************************ 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2616876 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2616876 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2616876 ']' 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
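The second test body that starts here repeats the whole flow with one difference visible in the trace: nvmf_create_transport gets -c 4096 (allow 4 KiB of in-capsule data) instead of -c 0. Since rpc_cmd wraps scripts/rpc.py against the target's RPC socket, the two variants correspond to the following, with option spelling taken directly from the trace (the reading of -c as the in-capsule data size is inferred from the in_capsule variable feeding it):

  # first body: no in-capsule data allowed
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this body: commands may carry up to 4096 bytes of data in the capsule
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096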
00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.921 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.183 [2024-11-20 11:12:08.706284] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:11:16.183 [2024-11-20 11:12:08.706331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.183 [2024-11-20 11:12:08.776027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.183 [2024-11-20 11:12:08.805883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.183 [2024-11-20 11:12:08.805913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.183 [2024-11-20 11:12:08.805918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.183 [2024-11-20 11:12:08.805924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.183 [2024-11-20 11:12:08.805928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.183 [2024-11-20 11:12:08.807194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.183 [2024-11-20 11:12:08.807225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.183 [2024-11-20 11:12:08.807476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.183 [2024-11-20 11:12:08.807277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.183 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.183 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:16.183 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.183 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.183 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 [2024-11-20 11:12:08.942860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.452 11:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.452 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 Malloc1 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 [2024-11-20 11:12:09.063395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:16.452 11:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.452 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:16.452 { 00:11:16.452 "name": "Malloc1", 00:11:16.452 "aliases": [ 00:11:16.452 "36b22c92-aa58-47f0-870e-5dc76016cd2f" 00:11:16.452 ], 00:11:16.452 "product_name": "Malloc disk", 00:11:16.452 "block_size": 512, 00:11:16.452 "num_blocks": 1048576, 00:11:16.452 "uuid": "36b22c92-aa58-47f0-870e-5dc76016cd2f", 00:11:16.452 "assigned_rate_limits": { 00:11:16.452 "rw_ios_per_sec": 0, 00:11:16.452 "rw_mbytes_per_sec": 0, 00:11:16.452 "r_mbytes_per_sec": 0, 00:11:16.452 "w_mbytes_per_sec": 0 00:11:16.452 }, 00:11:16.452 "claimed": true, 00:11:16.452 "claim_type": "exclusive_write", 00:11:16.452 "zoned": false, 00:11:16.452 "supported_io_types": { 00:11:16.452 "read": true, 00:11:16.452 "write": true, 00:11:16.452 "unmap": true, 00:11:16.452 "flush": true, 00:11:16.452 "reset": true, 00:11:16.452 "nvme_admin": false, 00:11:16.452 "nvme_io": false, 00:11:16.452 "nvme_io_md": false, 00:11:16.452 "write_zeroes": true, 00:11:16.452 "zcopy": true, 00:11:16.452 "get_zone_info": false, 00:11:16.452 "zone_management": false, 00:11:16.452 "zone_append": false, 00:11:16.452 "compare": false, 00:11:16.452 "compare_and_write": false, 00:11:16.452 "abort": true, 00:11:16.452 "seek_hole": false, 00:11:16.452 "seek_data": false, 00:11:16.452 "copy": true, 00:11:16.452 "nvme_iov_md": false 00:11:16.452 }, 00:11:16.452 "memory_domains": [ 00:11:16.452 { 00:11:16.452 "dma_device_id": "system", 00:11:16.452 "dma_device_type": 1 00:11:16.453 }, 00:11:16.453 { 00:11:16.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.453 "dma_device_type": 2 00:11:16.453 } 00:11:16.453 ], 00:11:16.453 "driver_specific": {} 00:11:16.453 } 00:11:16.453 ]' 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:16.453 11:12:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.452 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.452 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.452 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.452 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.452 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:20.363 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:20.625 11:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.625 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.569 ************************************ 00:11:21.569 START TEST filesystem_in_capsule_ext4 00:11:21.569 ************************************ 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:21.569 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:21.569 mke2fs 1.47.0 (5-Feb-2023) 00:11:21.830 Discarding device blocks: 0/522240 done 00:11:21.830 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:21.830 Filesystem UUID: 3f8fda81-6dd2-49ee-be97-de8e81adec14 00:11:21.830 Superblock backups stored on blocks: 00:11:21.830 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:21.830 00:11:21.830 Allocating group tables: 0/64 done 00:11:21.830 Writing inode tables: 
0/64 done 00:11:25.131 Creating journal (8192 blocks): done 00:11:25.131 Writing superblocks and filesystem accounting information: 0/64 done 00:11:25.131 00:11:25.131 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:25.131 11:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2616876 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.419 00:11:30.419 real 0m8.511s 00:11:30.419 user 0m0.019s 00:11:30.419 sys 0m0.089s 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:30.419 ************************************ 00:11:30.419 END TEST filesystem_in_capsule_ext4 00:11:30.419 ************************************ 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.419 
************************************ 00:11:30.419 START TEST filesystem_in_capsule_btrfs 00:11:30.419 ************************************ 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:30.419 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:30.681 btrfs-progs v6.8.1 00:11:30.681 See https://btrfs.readthedocs.io for more information. 00:11:30.681 00:11:30.681 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:30.681 NOTE: several default settings have changed in version 5.15, please make sure 00:11:30.681 this does not affect your deployments: 00:11:30.681 - DUP for metadata (-m dup) 00:11:30.681 - enabled no-holes (-O no-holes) 00:11:30.681 - enabled free-space-tree (-R free-space-tree) 00:11:30.681 00:11:30.681 Label: (null) 00:11:30.681 UUID: 03ffcddc-ae6a-4c44-b891-a7fd66f4afe2 00:11:30.681 Node size: 16384 00:11:30.681 Sector size: 4096 (CPU page size: 4096) 00:11:30.681 Filesystem size: 510.00MiB 00:11:30.681 Block group profiles: 00:11:30.681 Data: single 8.00MiB 00:11:30.681 Metadata: DUP 32.00MiB 00:11:30.681 System: DUP 8.00MiB 00:11:30.681 SSD detected: yes 00:11:30.681 Zoned device: no 00:11:30.681 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:30.681 Checksum: crc32c 00:11:30.681 Number of devices: 1 00:11:30.681 Devices: 00:11:30.681 ID SIZE PATH 00:11:30.681 1 510.00MiB /dev/nvme0n1p1 00:11:30.681 00:11:30.681 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:30.681 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2616876 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.942 00:11:30.942 real 0m0.779s 00:11:30.942 user 0m0.021s 00:11:30.942 sys 0m0.126s 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.942 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:30.942 ************************************ 00:11:30.942 END TEST filesystem_in_capsule_btrfs 00:11:30.942 ************************************ 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.202 ************************************ 00:11:31.202 START TEST filesystem_in_capsule_xfs 00:11:31.202 ************************************ 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:31.202 11:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:31.774 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:31.774 = sectsz=512 attr=2, projid32bit=1 00:11:31.774 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:31.774 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:31.774 data = bsize=4096 blocks=130560, imaxpct=25 00:11:31.774 = sunit=0 swidth=0 blks 00:11:31.774 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:31.774 log =internal log bsize=4096 blocks=16384, version=2 00:11:31.774 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:31.774 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:32.344 Discarding blocks...Done. 
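The three filesystem subtests above (ext4, btrfs, and here xfs) all reduce to the same create/mount/exercise/teardown loop from target/filesystem.sh, visible in the traced line numbers. A minimal bash sketch of that loop, assuming the /dev/nvme0n1p1 device and /mnt/device mount point seen in this log; the fs_check helper name and the driver loop at the end are illustrative only, not part of the script:

    fs_check() {
        local fstype=$1
        local dev=/dev/nvme0n1p1 mnt=/mnt/device   # paths as traced above
        if [ "$fstype" = ext4 ]; then
            mkfs.ext4 -F "$dev"                    # ext4 is forced with -F (common.sh@936)
        else
            "mkfs.$fstype" -f "$dev"               # btrfs and xfs are forced with -f (common.sh@938)
        fi
        mount "$dev" "$mnt"                        # filesystem.sh@23
        touch "$mnt/aaa"                           # filesystem.sh@24: create a file over NVMe/TCP
        sync                                       # filesystem.sh@25: flush dirty data to the remote namespace
        rm "$mnt/aaa"                              # filesystem.sh@26
        sync                                       # filesystem.sh@27
        umount "$mnt"                              # filesystem.sh@30
    }

    for fs in ext4 btrfs xfs; do fs_check "$fs"; done   # mirrors the three run_test calls
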
00:11:32.344 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:32.344 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2616876 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.885 00:11:34.885 real 0m3.891s 00:11:34.885 user 0m0.027s 00:11:34.885 sys 0m0.079s 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.885 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 ************************************ 00:11:34.885 END TEST filesystem_in_capsule_xfs 00:11:34.885 ************************************ 00:11:35.145 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:35.405 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:35.405 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2616876 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2616876 ']' 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2616876 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.406 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2616876 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2616876' 00:11:35.666 killing process with pid 2616876 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2616876 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2616876 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:35.666 00:11:35.666 real 0m19.750s 00:11:35.666 user 1m18.171s 00:11:35.666 sys 0m1.351s 00:11:35.666 11:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.666 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.666 ************************************ 00:11:35.666 END TEST nvmf_filesystem_in_capsule 00:11:35.666 ************************************ 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.927 rmmod nvme_tcp 00:11:35.927 rmmod nvme_fabrics 00:11:35.927 rmmod nvme_keyring 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.927 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.477 00:11:38.477 real 0m49.654s 00:11:38.477 user 2m38.016s 00:11:38.477 sys 0m8.584s 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.477 
************************************ 00:11:38.477 END TEST nvmf_filesystem 00:11:38.477 ************************************ 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.477 ************************************ 00:11:38.477 START TEST nvmf_target_discovery 00:11:38.477 ************************************ 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:38.477 * Looking for test storage... 00:11:38.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.477 --rc genhtml_branch_coverage=1 00:11:38.477 --rc genhtml_function_coverage=1 00:11:38.477 --rc genhtml_legend=1 00:11:38.477 --rc geninfo_all_blocks=1 00:11:38.477 --rc geninfo_unexecuted_blocks=1 00:11:38.477 00:11:38.477 ' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.477 --rc genhtml_branch_coverage=1 00:11:38.477 --rc genhtml_function_coverage=1 00:11:38.477 --rc genhtml_legend=1 00:11:38.477 --rc geninfo_all_blocks=1 00:11:38.477 --rc geninfo_unexecuted_blocks=1 00:11:38.477 00:11:38.477 ' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.477 --rc genhtml_branch_coverage=1 00:11:38.477 --rc genhtml_function_coverage=1 00:11:38.477 --rc genhtml_legend=1 00:11:38.477 --rc geninfo_all_blocks=1 00:11:38.477 --rc geninfo_unexecuted_blocks=1 00:11:38.477 00:11:38.477 ' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:38.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.477 --rc genhtml_branch_coverage=1 00:11:38.477 --rc genhtml_function_coverage=1 00:11:38.477 --rc genhtml_legend=1 00:11:38.477 --rc geninfo_all_blocks=1 00:11:38.477 --rc geninfo_unexecuted_blocks=1 00:11:38.477 00:11:38.477 ' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.477 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.478 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.623 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.623 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:46.624 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:46.624 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:46.624 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
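Before any discovery traffic can flow, nvmf/common.sh scans the PCI bus for supported NICs; in this run it finds the two Intel E810 functions (vendor:device 8086:159b) at 0000:4b:00.0 and 0000:4b:00.1 and resolves each one's kernel net device through sysfs, as the traced loop around this point shows. A condensed, hypothetical sketch of that resolution, assuming lspci is available (the real script iterates a prebuilt pci_bus_cache rather than shelling out to lspci):

    find_e810_netdevs() {
        local pci dev
        # enumerate domain-qualified PCI addresses whose vendor:device is 8086:159b
        for pci in $(lspci -Dn | awk '$3 == "8086:159b" {print $1}'); do
            # the bound ice driver exposes the interface name under sysfs
            for dev in /sys/bus/pci/devices/"$pci"/net/*; do
                [ -e "$dev" ] && echo "${dev##*/}"   # here: cvl_0_0 and cvl_0_1
            done
        done
    }

Those two interfaces are then split across a network namespace in the traces that follow: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1).
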
00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:46.624 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.624 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.625 11:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:11:46.625 00:11:46.625 --- 10.0.0.2 ping statistics --- 00:11:46.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.625 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:11:46.625 00:11:46.625 --- 10.0.0.1 ping statistics --- 00:11:46.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.625 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2625580 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2625580 00:11:46.625 11:12:38 
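What nvmf_tcp_init did above, condensed: with both switch-connected E810 ports on one host, it moves cvl_0_0 into a private network namespace to play the target at 10.0.0.2 and leaves cvl_0_1 in the root namespace as the initiator at 10.0.0.1, opens TCP port 4420 with a tagged iptables rule so teardown can find and strip it later, then ping-checks both directions before starting the target. The commands as traced:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the SPDK_NVMF comment lets cleanup drop the rule via iptables-save | grep -v
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns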
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2625580 ']' 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.625 11:12:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.625 [2024-11-20 11:12:38.550273] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:11:46.625 [2024-11-20 11:12:38.550337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.625 [2024-11-20 11:12:38.651142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.625 [2024-11-20 11:12:38.704425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.625 [2024-11-20 11:12:38.704478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.625 [2024-11-20 11:12:38.704487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.625 [2024-11-20 11:12:38.704495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.625 [2024-11-20 11:12:38.704501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
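nvmfappstart then launches the target inside that namespace and waitforlisten blocks until the RPC socket answers; the EAL notices above and the four reactor lines that follow are the app confirming its -m 0xF core mask. The traced launch plus an approximation of the wait (waitforlisten's real loop lives in autotest_common.sh):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app serves requests (sketch)
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target already died
    sleep 0.2
done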
00:11:46.625 [2024-11-20 11:12:38.706464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.625 [2024-11-20 11:12:38.706624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.625 [2024-11-20 11:12:38.706786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.625 [2024-11-20 11:12:38.706787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 [2024-11-20 11:12:39.430996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 Null1 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 [2024-11-20 11:12:39.491504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 Null2 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:46.888 Null3 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.888 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.889 Null4 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.889 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.151 11:12:39 
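The block around this point is discovery.sh's setup loop unrolled by xtrace: for each i in 1..4 it creates a null bdev, wraps it in subsystem nqn.2016-06.io.spdk:cnode$i, attaches the bdev as namespace 1, and listens on 10.0.0.2:4420. Collapsed back to the RPC calls (rpc_cmd is the test's wrapper around scripts/rpc.py):

for i in 1 2 3 4; do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# followed below by a discovery listener and a referral to a second
# discovery service on port 4430
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430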
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.151 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:47.151 00:11:47.151 Discovery Log Number of Records 6, Generation counter 6 00:11:47.151 =====Discovery Log Entry 0====== 00:11:47.151 trtype: tcp 00:11:47.151 adrfam: ipv4 00:11:47.151 subtype: current discovery subsystem 00:11:47.151 treq: not required 00:11:47.151 portid: 0 00:11:47.151 trsvcid: 4420 00:11:47.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.151 traddr: 10.0.0.2 00:11:47.151 eflags: explicit discovery connections, duplicate discovery information 00:11:47.151 sectype: none 00:11:47.151 =====Discovery Log Entry 1====== 00:11:47.151 trtype: tcp 00:11:47.151 adrfam: ipv4 00:11:47.151 subtype: nvme subsystem 00:11:47.151 treq: not required 00:11:47.151 portid: 0 00:11:47.151 trsvcid: 4420 00:11:47.151 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:47.151 traddr: 10.0.0.2 00:11:47.151 eflags: none 00:11:47.151 sectype: none 00:11:47.151 =====Discovery Log Entry 2====== 00:11:47.151 trtype: tcp 00:11:47.151 adrfam: ipv4 00:11:47.151 subtype: nvme subsystem 00:11:47.151 treq: not required 00:11:47.151 portid: 0 00:11:47.151 trsvcid: 4420 00:11:47.151 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:47.151 traddr: 10.0.0.2 00:11:47.151 eflags: none 00:11:47.151 sectype: none 00:11:47.151 =====Discovery Log Entry 3====== 00:11:47.151 trtype: tcp 00:11:47.151 adrfam: ipv4 00:11:47.151 subtype: nvme subsystem 00:11:47.151 treq: not required 00:11:47.151 portid: 0 00:11:47.151 trsvcid: 4420 00:11:47.151 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:47.152 traddr: 10.0.0.2 00:11:47.152 eflags: none 00:11:47.152 sectype: none 00:11:47.152 =====Discovery Log Entry 4====== 00:11:47.152 trtype: tcp 00:11:47.152 adrfam: ipv4 00:11:47.152 subtype: nvme subsystem 
00:11:47.152 treq: not required 00:11:47.152 portid: 0 00:11:47.152 trsvcid: 4420 00:11:47.152 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:47.152 traddr: 10.0.0.2 00:11:47.152 eflags: none 00:11:47.152 sectype: none 00:11:47.152 =====Discovery Log Entry 5====== 00:11:47.152 trtype: tcp 00:11:47.152 adrfam: ipv4 00:11:47.152 subtype: discovery subsystem referral 00:11:47.152 treq: not required 00:11:47.152 portid: 0 00:11:47.152 trsvcid: 4430 00:11:47.152 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.152 traddr: 10.0.0.2 00:11:47.152 eflags: none 00:11:47.152 sectype: none 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:47.152 Perform nvmf subsystem discovery via RPC 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.152 [ 00:11:47.152 { 00:11:47.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:47.152 "subtype": "Discovery", 00:11:47.152 "listen_addresses": [ 00:11:47.152 { 00:11:47.152 "trtype": "TCP", 00:11:47.152 "adrfam": "IPv4", 00:11:47.152 "traddr": "10.0.0.2", 00:11:47.152 "trsvcid": "4420" 00:11:47.152 } 00:11:47.152 ], 00:11:47.152 "allow_any_host": true, 00:11:47.152 "hosts": [] 00:11:47.152 }, 00:11:47.152 { 00:11:47.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.152 "subtype": "NVMe", 00:11:47.152 "listen_addresses": [ 00:11:47.152 { 00:11:47.152 "trtype": "TCP", 00:11:47.152 "adrfam": "IPv4", 00:11:47.152 "traddr": "10.0.0.2", 00:11:47.152 "trsvcid": "4420" 00:11:47.152 } 00:11:47.152 ], 00:11:47.152 "allow_any_host": true, 00:11:47.152 "hosts": [], 00:11:47.152 "serial_number": "SPDK00000000000001", 00:11:47.152 "model_number": "SPDK bdev Controller", 00:11:47.152 "max_namespaces": 32, 00:11:47.152 "min_cntlid": 1, 00:11:47.152 "max_cntlid": 65519, 00:11:47.152 "namespaces": [ 00:11:47.152 { 00:11:47.152 "nsid": 1, 00:11:47.152 "bdev_name": "Null1", 00:11:47.152 "name": "Null1", 00:11:47.152 "nguid": "A3B99047355A4B39B101AE37F5BC2AA5", 00:11:47.152 "uuid": "a3b99047-355a-4b39-b101-ae37f5bc2aa5" 00:11:47.152 } 00:11:47.152 ] 00:11:47.152 }, 00:11:47.152 { 00:11:47.152 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:47.152 "subtype": "NVMe", 00:11:47.152 "listen_addresses": [ 00:11:47.152 { 00:11:47.152 "trtype": "TCP", 00:11:47.152 "adrfam": "IPv4", 00:11:47.152 "traddr": "10.0.0.2", 00:11:47.152 "trsvcid": "4420" 00:11:47.152 } 00:11:47.152 ], 00:11:47.152 "allow_any_host": true, 00:11:47.152 "hosts": [], 00:11:47.152 "serial_number": "SPDK00000000000002", 00:11:47.152 "model_number": "SPDK bdev Controller", 00:11:47.152 "max_namespaces": 32, 00:11:47.152 "min_cntlid": 1, 00:11:47.152 "max_cntlid": 65519, 00:11:47.152 "namespaces": [ 00:11:47.152 { 00:11:47.152 "nsid": 1, 00:11:47.152 "bdev_name": "Null2", 00:11:47.152 "name": "Null2", 00:11:47.152 "nguid": "52E08CFFC30F4426A6C8B6ACC1E68B0D", 00:11:47.152 "uuid": "52e08cff-c30f-4426-a6c8-b6acc1e68b0d" 00:11:47.152 } 00:11:47.152 ] 00:11:47.152 }, 00:11:47.152 { 00:11:47.152 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:47.152 "subtype": "NVMe", 00:11:47.152 "listen_addresses": [ 00:11:47.152 { 00:11:47.152 "trtype": "TCP", 00:11:47.152 "adrfam": "IPv4", 00:11:47.152 "traddr": "10.0.0.2", 
00:11:47.152 "trsvcid": "4420" 00:11:47.152 } 00:11:47.152 ], 00:11:47.152 "allow_any_host": true, 00:11:47.152 "hosts": [], 00:11:47.152 "serial_number": "SPDK00000000000003", 00:11:47.152 "model_number": "SPDK bdev Controller", 00:11:47.152 "max_namespaces": 32, 00:11:47.152 "min_cntlid": 1, 00:11:47.152 "max_cntlid": 65519, 00:11:47.152 "namespaces": [ 00:11:47.152 { 00:11:47.152 "nsid": 1, 00:11:47.152 "bdev_name": "Null3", 00:11:47.152 "name": "Null3", 00:11:47.152 "nguid": "E3B62455D584487FAAA9ECCC457B3814", 00:11:47.152 "uuid": "e3b62455-d584-487f-aaa9-eccc457b3814" 00:11:47.152 } 00:11:47.152 ] 00:11:47.152 }, 00:11:47.152 { 00:11:47.152 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:47.152 "subtype": "NVMe", 00:11:47.152 "listen_addresses": [ 00:11:47.152 { 00:11:47.152 "trtype": "TCP", 00:11:47.152 "adrfam": "IPv4", 00:11:47.152 "traddr": "10.0.0.2", 00:11:47.152 "trsvcid": "4420" 00:11:47.152 } 00:11:47.152 ], 00:11:47.152 "allow_any_host": true, 00:11:47.152 "hosts": [], 00:11:47.152 "serial_number": "SPDK00000000000004", 00:11:47.152 "model_number": "SPDK bdev Controller", 00:11:47.152 "max_namespaces": 32, 00:11:47.152 "min_cntlid": 1, 00:11:47.152 "max_cntlid": 65519, 00:11:47.152 "namespaces": [ 00:11:47.152 { 00:11:47.152 "nsid": 1, 00:11:47.152 "bdev_name": "Null4", 00:11:47.152 "name": "Null4", 00:11:47.152 "nguid": "5E7A700FA4624E32AEAE5835B48C3F8C", 00:11:47.152 "uuid": "5e7a700f-a462-4e32-aeae-5835b48c3f8c" 00:11:47.152 } 00:11:47.152 ] 00:11:47.152 } 00:11:47.152 ] 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.152 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 
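Note that the two views above agree: nvme discover reported six records (the current discovery subsystem on 4420, cnode1 through cnode4, and the 4430 referral), while nvmf_get_subsystems returned five entries, because a referral is advertised in the discovery log but is not itself a subsystem. A quick way to eyeball that from the RPC dump:

rpc_cmd nvmf_get_subsystems | jq -r '.[].nqn'
# nqn.2014-08.org.nvmexpress.discovery
# nqn.2016-06.io.spdk:cnode1 ... through cnode4 (five lines total)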
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:47.414 11:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:47.414 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:47.414 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.415 rmmod nvme_tcp 00:11:47.415 rmmod nvme_fabrics 00:11:47.415 rmmod nvme_keyring 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2625580 ']' 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2625580 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2625580 ']' 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2625580 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.415 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625580 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625580' 00:11:47.677 killing process with pid 2625580 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2625580 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2625580 00:11:47.677 11:12:40 
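Teardown walks the setup loop in reverse dependency order, subsystem before its bdev, drops the referral, and only passes if bdev_get_bdevs comes back empty; nvmftestfini then unloads nvme-tcp, nvme-fabrics and nvme-keyring and kills the target pid. Condensed:

for i in 1 2 3 4; do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
[ -n "$check_bdevs" ] && exit 1   # any surviving bdev fails the test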
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.677 11:12:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.227 00:11:50.227 real 0m11.731s 00:11:50.227 user 0m9.016s 00:11:50.227 sys 0m6.033s 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.227 ************************************ 00:11:50.227 END TEST nvmf_target_discovery 00:11:50.227 ************************************ 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.227 ************************************ 00:11:50.227 START TEST nvmf_referrals 00:11:50.227 ************************************ 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.227 * Looking for test storage... 
00:11:50.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.227 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:50.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.228 --rc genhtml_branch_coverage=1 00:11:50.228 --rc genhtml_function_coverage=1 00:11:50.228 --rc genhtml_legend=1 00:11:50.228 --rc geninfo_all_blocks=1 00:11:50.228 --rc geninfo_unexecuted_blocks=1 00:11:50.228 00:11:50.228 ' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:50.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.228 --rc genhtml_branch_coverage=1 00:11:50.228 --rc genhtml_function_coverage=1 00:11:50.228 --rc genhtml_legend=1 00:11:50.228 --rc geninfo_all_blocks=1 00:11:50.228 --rc geninfo_unexecuted_blocks=1 00:11:50.228 00:11:50.228 ' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:50.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.228 --rc genhtml_branch_coverage=1 00:11:50.228 --rc genhtml_function_coverage=1 00:11:50.228 --rc genhtml_legend=1 00:11:50.228 --rc geninfo_all_blocks=1 00:11:50.228 --rc geninfo_unexecuted_blocks=1 00:11:50.228 00:11:50.228 ' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:50.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.228 --rc genhtml_branch_coverage=1 00:11:50.228 --rc genhtml_function_coverage=1 00:11:50.228 --rc genhtml_legend=1 00:11:50.228 --rc geninfo_all_blocks=1 00:11:50.228 --rc geninfo_unexecuted_blocks=1 00:11:50.228 00:11:50.228 ' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
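The "line 33: [: : integer expression expected" complaint above is bash's test builtin being handed an empty string where -eq needs an integer: the traced expression is '[' '' -eq 1 ']', i.e. the flag being checked was unset when common.sh built the app arguments. It is harmless here, the test simply fails and the script carries on, but the pattern is easy to reproduce and to guard (the variable name below is illustrative):

flag=''
[ "$flag" -eq 1 ] || true        # prints: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || true   # quiet: defaults the empty value, 0 -eq 1 is false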
00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.228 11:12:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:58.369 11:12:49 
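With NVMF_REFERRAL_IP_1..3 (127.0.0.2-4) and NVMF_PORT_REFERRAL=4430 defined, referrals.sh now repeats the same NIC probe and namespace bring-up before exercising referral add/get/remove cycles against the discovery service. A sketch of the cycle it drives, using the referral RPCs already seen in this log plus their companion query (the exact assertions, and the JSON field names jq pulls out, are assumptions about referrals.sh and the RPC's output shape):

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# expect the three addresses back, then drop them again one by one
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done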
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.369 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:58.369 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:58.370 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:58.370 
11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:58.370 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:58.370 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.370 11:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.370 11:12:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:58.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:58.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:11:58.370 00:11:58.370 --- 10.0.0.2 ping statistics --- 00:11:58.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.370 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:58.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:11:58.370 00:11:58.370 --- 10.0.0.1 ping statistics --- 00:11:58.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.370 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2630274 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2630274 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2630274 ']' 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
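nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace created above, so its listener binds to cvl_0_0 (10.0.0.2) while the initiator side stays on cvl_0_1 (10.0.0.1), then blocks until the RPC socket answers. A hedged equivalent of that launch-and-wait step, with a simple polling loop standing in for waitforlisten (paths relative to an SPDK checkout):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Unix sockets are not network-namespaced, so rpc.py can poll from outside
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done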
00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.370 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.370 [2024-11-20 11:12:50.382186] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:11:58.370 [2024-11-20 11:12:50.382252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.370 [2024-11-20 11:12:50.483077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.370 [2024-11-20 11:12:50.536835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.370 [2024-11-20 11:12:50.536886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.370 [2024-11-20 11:12:50.536894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.370 [2024-11-20 11:12:50.536901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.370 [2024-11-20 11:12:50.536907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.370 [2024-11-20 11:12:50.538965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.370 [2024-11-20 11:12:50.539123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.370 [2024-11-20 11:12:50.539290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.371 [2024-11-20 11:12:50.539449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 [2024-11-20 11:12:51.251912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
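With the app up, referrals.sh creates the TCP transport and a discovery listener; the NOTICE lines around this point confirm both. rpc_cmd is a thin wrapper around rpc.py, so the same two calls can be replayed directly, flags exactly as traced:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery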
00:11:58.633 [2024-11-20 11:12:51.268314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.633 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.893 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.894 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:59.154 11:12:51 
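That completes one full referral round trip: three referrals added on port 4430, the RPC view (nvmf_discovery_get_referrals piped through jq) and the on-wire discovery log page (nvme discover) both compared against the same sorted address list, then all three removed and the count checked back to zero. Condensed into a hedged sketch built only from the commands traced above ($HOSTNQN stands in for the generated host NQN):

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # Both views must agree on "127.0.0.2 127.0.0.3 127.0.0.4"
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done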
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.154 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.415 11:12:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.415 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:59.675 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:59.675 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:59.675 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:59.675 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:59.675 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.675 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.934 11:12:52 
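The second phase attaches explicit subsystem NQNs: -n discovery marks a referral to another discovery service, while -n nqn.2016-06.io.spdk:cnode1 refers hosts to a specific NVM subsystem, and the discovery log page then reports each entry with the matching subtype that the jq filters above key on. A hedged replay of those steps:

    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1
    log=$(nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json)
    # Expect nqn.2016-06.io.spdk:cnode1 / nqn.2014-08.org.nvmexpress.discovery
    jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn' <<< "$log"
    jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn' <<< "$log"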
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:59.934 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.194 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:00.195 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:00.455 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:00.455 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:00.455 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:00.455 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.455 11:12:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:00.455 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:00.715 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:00.715 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:00.715 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
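With every referral verified gone, nvmftestfini starts tearing the fixture down. A hedged outline of the cleanup the surrounding trace performs ("ip netns delete" is an assumption about what _remove_spdk_ns does; only the final address flush is visible below):

    modprobe -v -r nvme-tcp nvme-fabrics   # the rmmod lines below come from this
    kill "$nvmfpid" && wait "$nvmfpid"     # killprocess $nvmfpid
    # Drop only the SPDK-tagged firewall rules installed during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1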
00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.716 rmmod nvme_tcp 00:12:00.716 rmmod nvme_fabrics 00:12:00.716 rmmod nvme_keyring 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2630274 ']' 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2630274 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2630274 ']' 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2630274 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.716 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2630274 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2630274' 00:12:00.976 killing process with pid 2630274 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2630274 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2630274 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.976 11:12:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.976 11:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.521 00:12:03.521 real 0m13.200s 00:12:03.521 user 0m15.500s 00:12:03.521 sys 0m6.559s 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:03.521 ************************************ 00:12:03.521 END TEST nvmf_referrals 00:12:03.521 ************************************ 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.521 ************************************ 00:12:03.521 START TEST nvmf_connect_disconnect 00:12:03.521 ************************************ 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:03.521 * Looking for test storage... 00:12:03.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.521 11:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.521 --rc genhtml_branch_coverage=1 00:12:03.521 --rc genhtml_function_coverage=1 00:12:03.521 --rc genhtml_legend=1 00:12:03.521 --rc geninfo_all_blocks=1 00:12:03.521 --rc geninfo_unexecuted_blocks=1 00:12:03.521 00:12:03.521 ' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.521 --rc genhtml_branch_coverage=1 00:12:03.521 --rc genhtml_function_coverage=1 00:12:03.521 --rc genhtml_legend=1 00:12:03.521 --rc geninfo_all_blocks=1 00:12:03.521 --rc geninfo_unexecuted_blocks=1 00:12:03.521 00:12:03.521 ' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.521 --rc genhtml_branch_coverage=1 00:12:03.521 --rc genhtml_function_coverage=1 00:12:03.521 --rc genhtml_legend=1 00:12:03.521 --rc geninfo_all_blocks=1 00:12:03.521 --rc geninfo_unexecuted_blocks=1 00:12:03.521 00:12:03.521 ' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:03.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.521 --rc genhtml_branch_coverage=1 00:12:03.521 --rc genhtml_function_coverage=1 00:12:03.521 --rc genhtml_legend=1 00:12:03.521 --rc geninfo_all_blocks=1 00:12:03.521 --rc geninfo_unexecuted_blocks=1 00:12:03.521 00:12:03.521 ' 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.521 11:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.521 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.522 11:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.522 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.663 
11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:11.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.663 
11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.663 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:11.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:11.664 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:11.664 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:12:11.664 00:12:11.664 --- 10.0.0.2 ping statistics --- 00:12:11.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.664 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:12:11.664 00:12:11.664 --- 10.0.0.1 ping statistics --- 00:12:11.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.664 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2635057 00:12:11.664 11:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2635057 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2635057 ']' 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.664 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.664 [2024-11-20 11:13:03.617511] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:12:11.664 [2024-11-20 11:13:03.617586] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.664 [2024-11-20 11:13:03.718109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.664 [2024-11-20 11:13:03.771837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.664 [2024-11-20 11:13:03.771887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.664 [2024-11-20 11:13:03.771895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.664 [2024-11-20 11:13:03.771902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.664 [2024-11-20 11:13:03.771909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:11.664 [2024-11-20 11:13:03.774427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.664 [2024-11-20 11:13:03.774589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.664 [2024-11-20 11:13:03.774751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.664 [2024-11-20 11:13:03.774751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 [2024-11-20 11:13:04.491389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 11:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:11.925 [2024-11-20 11:13:04.570751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:11.925 11:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:16.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.379 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:30.379 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:30.379 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:30.379 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:30.379 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:30.380 rmmod nvme_tcp 00:12:30.380 rmmod nvme_fabrics 00:12:30.380 rmmod nvme_keyring 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2635057 ']' 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2635057 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2635057 ']' 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2635057 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.380 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2635057 00:12:30.380 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.380 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.380 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2635057' 00:12:30.380 killing process with pid 2635057 00:12:30.380 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2635057 00:12:30.380 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2635057 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.641 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.554 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.554 00:12:32.554 real 0m29.454s 00:12:32.554 user 1m19.387s 00:12:32.554 sys 0m7.164s 00:12:32.554 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.554 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.554 ************************************ 00:12:32.554 END TEST nvmf_connect_disconnect 00:12:32.554 ************************************ 00:12:32.554 11:13:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:32.554 11:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.554 11:13:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.554 11:13:25 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.815 ************************************ 00:12:32.815 START TEST nvmf_multitarget 00:12:32.815 ************************************ 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:32.815 * Looking for test storage... 00:12:32.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.815 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.816 --rc genhtml_branch_coverage=1 00:12:32.816 --rc genhtml_function_coverage=1 00:12:32.816 --rc genhtml_legend=1 00:12:32.816 --rc geninfo_all_blocks=1 00:12:32.816 --rc geninfo_unexecuted_blocks=1 00:12:32.816 00:12:32.816 ' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.816 --rc genhtml_branch_coverage=1 00:12:32.816 --rc genhtml_function_coverage=1 00:12:32.816 --rc genhtml_legend=1 00:12:32.816 --rc geninfo_all_blocks=1 00:12:32.816 --rc geninfo_unexecuted_blocks=1 00:12:32.816 00:12:32.816 ' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.816 --rc genhtml_branch_coverage=1 00:12:32.816 --rc genhtml_function_coverage=1 00:12:32.816 --rc genhtml_legend=1 00:12:32.816 --rc geninfo_all_blocks=1 00:12:32.816 --rc geninfo_unexecuted_blocks=1 00:12:32.816 00:12:32.816 ' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.816 --rc genhtml_branch_coverage=1 00:12:32.816 --rc genhtml_function_coverage=1 00:12:32.816 --rc genhtml_legend=1 00:12:32.816 --rc geninfo_all_blocks=1 00:12:32.816 --rc geninfo_unexecuted_blocks=1 00:12:32.816 00:12:32.816 ' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.816 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.816 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:33.077 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.077 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:33.078 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:33.078 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:33.078 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:41.223 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:41.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:41.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:41.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.223 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:12:41.224 00:12:41.224 --- 10.0.0.2 ping statistics --- 00:12:41.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.224 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:12:41.224 00:12:41.224 --- 10.0.0.1 ping statistics --- 00:12:41.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.224 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.224 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2643196 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2643196 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2643196 ']' 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.224 [2024-11-20 11:13:33.090205] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:12:41.224 [2024-11-20 11:13:33.090270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.224 [2024-11-20 11:13:33.194363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:41.224 [2024-11-20 11:13:33.247461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.224 [2024-11-20 11:13:33.247516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.224 [2024-11-20 11:13:33.247525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.224 [2024-11-20 11:13:33.247531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.224 [2024-11-20 11:13:33.247538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.224 [2024-11-20 11:13:33.249554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.224 [2024-11-20 11:13:33.249715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.224 [2024-11-20 11:13:33.249843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.224 [2024-11-20 11:13:33.249844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.224 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:41.485 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.485 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:41.485 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:41.485 11:13:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:41.485 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:41.485 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:41.485 "nvmf_tgt_1" 00:12:41.485 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:41.747 "nvmf_tgt_2" 00:12:41.747 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:41.747 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:41.747 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:41.747 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:42.007 true 00:12:42.007 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:42.007 true 00:12:42.007 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:42.007 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.268 rmmod nvme_tcp 00:12:42.268 rmmod nvme_fabrics 00:12:42.268 rmmod nvme_keyring 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2643196 ']' 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2643196 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2643196 ']' 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2643196 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2643196 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.268 11:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2643196' 00:12:42.268 killing process with pid 2643196 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2643196 00:12:42.268 11:13:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2643196 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.532 11:13:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:44.449 00:12:44.449 real 0m11.815s 00:12:44.449 user 0m10.269s 00:12:44.449 sys 0m6.115s 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.449 ************************************ 00:12:44.449 END TEST nvmf_multitarget 00:12:44.449 ************************************ 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.449 11:13:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.711 ************************************ 00:12:44.711 START TEST nvmf_rpc 00:12:44.711 ************************************ 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:44.711 * Looking for test storage... 
00:12:44.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:44.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.711 --rc genhtml_branch_coverage=1 00:12:44.711 --rc genhtml_function_coverage=1 00:12:44.711 --rc genhtml_legend=1 00:12:44.711 --rc geninfo_all_blocks=1 00:12:44.711 --rc geninfo_unexecuted_blocks=1 00:12:44.711 00:12:44.711 ' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:44.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.711 --rc genhtml_branch_coverage=1 00:12:44.711 --rc genhtml_function_coverage=1 00:12:44.711 --rc genhtml_legend=1 00:12:44.711 --rc geninfo_all_blocks=1 00:12:44.711 --rc geninfo_unexecuted_blocks=1 00:12:44.711 00:12:44.711 ' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:44.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.711 --rc genhtml_branch_coverage=1 00:12:44.711 --rc genhtml_function_coverage=1 00:12:44.711 --rc genhtml_legend=1 00:12:44.711 --rc geninfo_all_blocks=1 00:12:44.711 --rc geninfo_unexecuted_blocks=1 00:12:44.711 00:12:44.711 ' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:44.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:44.711 --rc genhtml_branch_coverage=1 00:12:44.711 --rc genhtml_function_coverage=1 00:12:44.711 --rc genhtml_legend=1 00:12:44.711 --rc geninfo_all_blocks=1 00:12:44.711 --rc geninfo_unexecuted_blocks=1 00:12:44.711 00:12:44.711 ' 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.711 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.973 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:44.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:44.974 11:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:44.974 11:13:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:53.118 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:53.118 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:53.118 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:53.118 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:53.118 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:53.119 11:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:53.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:12:53.119 00:12:53.119 --- 10.0.0.2 ping statistics --- 00:12:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.119 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:53.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:12:53.119 00:12:53.119 --- 10.0.0.1 ping statistics --- 00:12:53.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.119 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2647882 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2647882 00:12:53.119 11:13:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.119 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2647882 ']' 00:12:53.119 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.119 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.119 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.119 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.119 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.119 [2024-11-20 11:13:45.058285] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:12:53.119 [2024-11-20 11:13:45.058351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.119 [2024-11-20 11:13:45.160153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.119 [2024-11-20 11:13:45.213201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.119 [2024-11-20 11:13:45.213257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.119 [2024-11-20 11:13:45.213266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.119 [2024-11-20 11:13:45.213273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.119 [2024-11-20 11:13:45.213279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.119 [2024-11-20 11:13:45.215258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.119 [2024-11-20 11:13:45.215394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.119 [2024-11-20 11:13:45.215554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.119 [2024-11-20 11:13:45.215555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:53.380 "tick_rate": 2400000000, 00:12:53.380 "poll_groups": [ 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_000", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [] 00:12:53.380 }, 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_001", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [] 00:12:53.380 }, 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_002", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 
"current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [] 00:12:53.380 }, 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_003", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [] 00:12:53.380 } 00:12:53.380 ] 00:12:53.380 }' 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:53.380 11:13:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.380 [2024-11-20 11:13:46.060120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:53.380 "tick_rate": 2400000000, 00:12:53.380 "poll_groups": [ 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_000", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [ 00:12:53.380 { 00:12:53.380 "trtype": "TCP" 00:12:53.380 } 00:12:53.380 ] 00:12:53.380 }, 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_001", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [ 00:12:53.380 { 00:12:53.380 "trtype": "TCP" 00:12:53.380 } 00:12:53.380 ] 00:12:53.380 }, 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_002", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [ 00:12:53.380 { 00:12:53.380 "trtype": "TCP" 
00:12:53.380 } 00:12:53.380 ] 00:12:53.380 }, 00:12:53.380 { 00:12:53.380 "name": "nvmf_tgt_poll_group_003", 00:12:53.380 "admin_qpairs": 0, 00:12:53.380 "io_qpairs": 0, 00:12:53.380 "current_admin_qpairs": 0, 00:12:53.380 "current_io_qpairs": 0, 00:12:53.380 "pending_bdev_io": 0, 00:12:53.380 "completed_nvme_io": 0, 00:12:53.380 "transports": [ 00:12:53.380 { 00:12:53.380 "trtype": "TCP" 00:12:53.380 } 00:12:53.380 ] 00:12:53.380 } 00:12:53.380 ] 00:12:53.380 }' 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:53.380 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 Malloc1 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 [2024-11-20 11:13:46.268657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.642 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:53.643 [2024-11-20 11:13:46.305802] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:53.643 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:53.643 could not add new controller: failed to write to nvme-fabrics device 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:53.643 11:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.643 11:13:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.555 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.555 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.555 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.555 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:55.555 11:13:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:57.464 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:57.465 11:13:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.465 [2024-11-20 11:13:50.019168] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:57.465 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:57.465 could not add new controller: failed to write to nvme-fabrics device 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.465 
11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.465 11:13:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.374 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.374 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.374 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.374 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.374 11:13:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:01.285 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.286 
11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.286 [2024-11-20 11:13:53.782913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.286 11:13:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.671 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.671 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.671 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.671 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:02.671 11:13:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.214 [2024-11-20 11:13:57.645137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.214 11:13:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.598 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.598 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.598 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.598 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.598 11:13:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.626 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 [2024-11-20 11:14:01.366151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.886 11:14:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.269 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.269 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:10.269 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.269 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:10.269 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:12.812 
11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:12.812 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:12.812 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.812 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:12.812 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.812 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:12.812 11:14:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 [2024-11-20 11:14:05.175289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.812 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.197 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.197 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.197 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.197 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:14.197 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:16.109 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.369 [2024-11-20 11:14:08.898150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.369 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.757 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.757 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.757 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.757 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:17.757 11:14:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.669 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.669 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:19.929 
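Each pass of the seq 1 5 loop that just finished runs the same lifecycle: create the subsystem with a known serial, add a TCP listener and a Malloc1-backed namespace pinned to NSID 5, open it to any host, connect with the kernel initiator, wait for the device, then tear everything down in reverse. Condensed, one iteration looks like the sketch below, with rpc.py standing in for the test's rpc_cmd wrapper and $HOSTNQN/$HOSTID as placeholders for the values visible in the trace.

    # One iteration of the loop traced above (target/rpc.sh@81-@94).
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # pin NSID 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"
    waitforserial SPDKISFASTANDAWESOME            # block device appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The second loop starting here (target/rpc.sh@99-@107) repeats the subsystem/listener/namespace add-and-remove cycle five more times without connecting a host, exercising only the RPC-side state changes.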
11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 [2024-11-20 11:14:12.618330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.929 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.930 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.930 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.930 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 [2024-11-20 11:14:12.690513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 
11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 [2024-11-20 11:14:12.762730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 [2024-11-20 11:14:12.834942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:20.190 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.191 [2024-11-20 11:14:12.903136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.191 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.450 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.450 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.450 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.450 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.450 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:20.451 "tick_rate": 2400000000, 00:13:20.451 "poll_groups": [ 00:13:20.451 { 00:13:20.451 "name": "nvmf_tgt_poll_group_000", 00:13:20.451 "admin_qpairs": 0, 00:13:20.451 "io_qpairs": 224, 00:13:20.451 "current_admin_qpairs": 0, 00:13:20.451 "current_io_qpairs": 0, 00:13:20.451 "pending_bdev_io": 0, 00:13:20.451 "completed_nvme_io": 226, 00:13:20.451 "transports": [ 00:13:20.451 { 00:13:20.451 "trtype": "TCP" 00:13:20.451 } 00:13:20.451 ] 00:13:20.451 }, 00:13:20.451 { 00:13:20.451 "name": "nvmf_tgt_poll_group_001", 00:13:20.451 "admin_qpairs": 1, 00:13:20.451 "io_qpairs": 223, 00:13:20.451 "current_admin_qpairs": 0, 00:13:20.451 "current_io_qpairs": 0, 00:13:20.451 "pending_bdev_io": 0, 00:13:20.451 "completed_nvme_io": 273, 00:13:20.451 "transports": [ 00:13:20.451 { 00:13:20.451 "trtype": "TCP" 00:13:20.451 } 00:13:20.451 ] 00:13:20.451 }, 00:13:20.451 { 00:13:20.451 "name": "nvmf_tgt_poll_group_002", 00:13:20.451 "admin_qpairs": 6, 00:13:20.451 "io_qpairs": 218, 00:13:20.451 "current_admin_qpairs": 0, 00:13:20.451 "current_io_qpairs": 0, 00:13:20.451 "pending_bdev_io": 0, 00:13:20.451 "completed_nvme_io": 514, 00:13:20.451 "transports": [ 00:13:20.451 { 00:13:20.451 "trtype": "TCP" 00:13:20.451 } 00:13:20.451 ] 00:13:20.451 }, 00:13:20.451 { 00:13:20.451 "name": "nvmf_tgt_poll_group_003", 00:13:20.451 "admin_qpairs": 0, 00:13:20.451 "io_qpairs": 224, 00:13:20.451 "current_admin_qpairs": 0, 00:13:20.451 "current_io_qpairs": 0, 00:13:20.451 "pending_bdev_io": 0, 00:13:20.451 "completed_nvme_io": 226, 00:13:20.451 "transports": [ 00:13:20.451 { 00:13:20.451 "trtype": "TCP" 00:13:20.451 } 00:13:20.451 ] 00:13:20.451 } 00:13:20.451 ] 00:13:20.451 }' 00:13:20.451 11:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:20.451 11:14:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:20.451 rmmod nvme_tcp 00:13:20.451 rmmod nvme_fabrics 00:13:20.451 rmmod nvme_keyring 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2647882 ']' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2647882 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2647882 ']' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2647882 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.451 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2647882 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2647882' 00:13:20.711 killing process with pid 2647882 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2647882 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2647882 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.711 11:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.255 00:13:23.255 real 0m38.190s 00:13:23.255 user 1m54.447s 00:13:23.255 sys 0m7.944s 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.255 ************************************ 00:13:23.255 END TEST nvmf_rpc 00:13:23.255 ************************************ 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:23.255 ************************************ 00:13:23.255 START TEST nvmf_invalid 00:13:23.255 ************************************ 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:23.255 * Looking for test storage... 
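The (( 7 > 0 )) and (( 889 > 0 )) assertions above come from jsum, a small helper that feeds the nvmf_get_stats JSON through jq and sums the selected per-poll-group field with awk; the test only requires the totals to be non-zero after all the connect/disconnect churn. Reconstructed from the @19-@20 trace lines; the stats variable and rpc.py invocation are stand-ins for the test's own plumbing.

    # jsum as seen in the trace: sum one numeric field across poll groups.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    stats=$(rpc.py nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 above
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889 above

After the stats check, nvmftestfini unloads the nvme-tcp/fabrics/keyring modules, kills the target process (pid 2647882 here), and flushes the test network namespace, which is the rmmod/killprocess/ip output in this stretch of the log.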
00:13:23.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.255 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.256 --rc genhtml_branch_coverage=1 00:13:23.256 --rc genhtml_function_coverage=1 00:13:23.256 --rc genhtml_legend=1 00:13:23.256 --rc geninfo_all_blocks=1 00:13:23.256 --rc geninfo_unexecuted_blocks=1 00:13:23.256 00:13:23.256 ' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.256 --rc genhtml_branch_coverage=1 00:13:23.256 --rc genhtml_function_coverage=1 00:13:23.256 --rc genhtml_legend=1 00:13:23.256 --rc geninfo_all_blocks=1 00:13:23.256 --rc geninfo_unexecuted_blocks=1 00:13:23.256 00:13:23.256 ' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.256 --rc genhtml_branch_coverage=1 00:13:23.256 --rc genhtml_function_coverage=1 00:13:23.256 --rc genhtml_legend=1 00:13:23.256 --rc geninfo_all_blocks=1 00:13:23.256 --rc geninfo_unexecuted_blocks=1 00:13:23.256 00:13:23.256 ' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:23.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.256 --rc genhtml_branch_coverage=1 00:13:23.256 --rc genhtml_function_coverage=1 00:13:23.256 --rc genhtml_legend=1 00:13:23.256 --rc geninfo_all_blocks=1 00:13:23.256 --rc geninfo_unexecuted_blocks=1 00:13:23.256 00:13:23.256 ' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:23.256 11:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:23.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
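Note the live shell error captured above: nvmf/common.sh line 33 evaluated '[' '' -eq 1 ']' and bash reported "[: : integer expression expected", because an unset variable reached a numeric test as the empty string. A small sketch of the usual guard (the variable name is illustrative, not the one in common.sh):

    some_flag=""                          # hypothetical variable, empty as in the log
    if [ "${some_flag:-0}" -eq 1 ]; then  # ':-0' keeps the operand numeric
        echo "flag enabled"
    fi
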
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:23.256 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.405 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:31.406 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:31.406 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:31.406 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:31.406 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
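The discovery pass above filters PCI IDs down to the e810 family (0x8086 with 0x1592/0x159b) and then maps each matching address to its kernel interface by listing sysfs, which is where the "Found net devices under 0000:4b:00.0: cvl_0_0" lines come from. A reduced sketch of that mapping, assuming the sysfs layout seen on this host:

    # Map PCI addresses to net interface names via sysfs (simplified
    # from the gather_supported_nvmf_pci_devs trace above).
    for pci in 0000:4b:00.0 0000:4b:00.1; do          # addresses from this run
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue              # no driver bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done
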
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.406 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.406 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.406 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.406 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:31.406 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.406 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.406 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:31.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:13:31.407 00:13:31.407 --- 10.0.0.2 ping statistics --- 00:13:31.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.407 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:13:31.407 00:13:31.407 --- 10.0.0.1 ping statistics --- 00:13:31.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.407 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2657705 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2657705 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2657705 ']' 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.407 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.407 [2024-11-20 11:14:23.328102] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
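Taken together, the nvmf_tcp_init block above builds a two-endpoint test bed from one dual-port NIC: cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2 (target side), cvl_0_1 stays in the root namespace with 10.0.0.1 (initiator side), an iptables rule opens TCP 4420, and one ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. Condensed into the underlying commands (interface names and addresses as in this log; the trace also flushes addresses first and tags the iptables rule with a comment; needs root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator port
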
00:13:31.407 [2024-11-20 11:14:23.328183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.407 [2024-11-20 11:14:23.430110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.407 [2024-11-20 11:14:23.482567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.407 [2024-11-20 11:14:23.482620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.407 [2024-11-20 11:14:23.482629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.407 [2024-11-20 11:14:23.482641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.407 [2024-11-20 11:14:23.482648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.407 [2024-11-20 11:14:23.484736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.407 [2024-11-20 11:14:23.484895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.407 [2024-11-20 11:14:23.485057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.407 [2024-11-20 11:14:23.485057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3100 00:13:31.668 [2024-11-20 11:14:24.365850] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:31.668 { 00:13:31.668 "nqn": "nqn.2016-06.io.spdk:cnode3100", 00:13:31.668 "tgt_name": "foobar", 00:13:31.668 "method": "nvmf_create_subsystem", 00:13:31.668 "req_id": 1 00:13:31.668 } 00:13:31.668 Got JSON-RPC error response 00:13:31.668 response: 00:13:31.668 { 00:13:31.668 "code": -32603, 00:13:31.668 "message": "Unable to find target foobar" 00:13:31.668 }' 00:13:31.668 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:31.668 { 00:13:31.668 "nqn": "nqn.2016-06.io.spdk:cnode3100", 00:13:31.668 "tgt_name": "foobar", 00:13:31.668 "method": "nvmf_create_subsystem", 00:13:31.668 "req_id": 1 00:13:31.668 } 00:13:31.668 Got JSON-RPC error response 00:13:31.668 
response: 00:13:31.668 { 00:13:31.668 "code": -32603, 00:13:31.668 "message": "Unable to find target foobar" 00:13:31.668 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:31.931 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:31.931 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29071 00:13:31.931 [2024-11-20 11:14:24.574706] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29071: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:31.931 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:31.931 { 00:13:31.931 "nqn": "nqn.2016-06.io.spdk:cnode29071", 00:13:31.931 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:31.931 "method": "nvmf_create_subsystem", 00:13:31.931 "req_id": 1 00:13:31.931 } 00:13:31.931 Got JSON-RPC error response 00:13:31.931 response: 00:13:31.931 { 00:13:31.931 "code": -32602, 00:13:31.931 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:31.931 }' 00:13:31.931 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:31.931 { 00:13:31.931 "nqn": "nqn.2016-06.io.spdk:cnode29071", 00:13:31.931 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:31.931 "method": "nvmf_create_subsystem", 00:13:31.931 "req_id": 1 00:13:31.931 } 00:13:31.931 Got JSON-RPC error response 00:13:31.931 response: 00:13:31.931 { 00:13:31.931 "code": -32602, 00:13:31.931 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:31.931 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:31.931 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:31.931 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14181 00:13:32.193 [2024-11-20 11:14:24.779485] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14181: invalid model number 'SPDK_Controller' 00:13:32.193 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:32.193 { 00:13:32.193 "nqn": "nqn.2016-06.io.spdk:cnode14181", 00:13:32.193 "model_number": "SPDK_Controller\u001f", 00:13:32.193 "method": "nvmf_create_subsystem", 00:13:32.193 "req_id": 1 00:13:32.193 } 00:13:32.193 Got JSON-RPC error response 00:13:32.193 response: 00:13:32.193 { 00:13:32.194 "code": -32602, 00:13:32.194 "message": "Invalid MN SPDK_Controller\u001f" 00:13:32.194 }' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:32.194 { 00:13:32.194 "nqn": "nqn.2016-06.io.spdk:cnode14181", 00:13:32.194 "model_number": "SPDK_Controller\u001f", 00:13:32.194 "method": "nvmf_create_subsystem", 00:13:32.194 "req_id": 1 00:13:32.194 } 00:13:32.194 Got JSON-RPC error response 00:13:32.194 response: 00:13:32.194 { 00:13:32.194 "code": -32602, 00:13:32.194 "message": "Invalid MN SPDK_Controller\u001f" 00:13:32.194 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:32.194 11:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:32.194 
11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.194 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v4vk&>jb'\''2grg)a(t`Fqa' 00:13:32.456 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'v4vk&>jb'\''2grg)a(t`Fqa' nqn.2016-06.io.spdk:cnode9603 00:13:32.456 [2024-11-20 11:14:25.164993] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9603: invalid serial number 'v4vk&>jb'2grg)a(t`Fqa' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:32.721 { 00:13:32.721 "nqn": "nqn.2016-06.io.spdk:cnode9603", 00:13:32.721 "serial_number": "v4vk&>jb'\''2grg)a(t`Fqa", 00:13:32.721 "method": "nvmf_create_subsystem", 00:13:32.721 "req_id": 1 00:13:32.721 } 00:13:32.721 Got JSON-RPC error response 00:13:32.721 response: 00:13:32.721 { 00:13:32.721 "code": -32602, 00:13:32.721 "message": "Invalid SN v4vk&>jb'\''2grg)a(t`Fqa" 00:13:32.721 }' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:32.721 { 00:13:32.721 "nqn": "nqn.2016-06.io.spdk:cnode9603", 00:13:32.721 "serial_number": "v4vk&>jb'2grg)a(t`Fqa", 00:13:32.721 "method": "nvmf_create_subsystem", 00:13:32.721 "req_id": 1 00:13:32.721 } 00:13:32.721 Got JSON-RPC error response 00:13:32.721 response: 00:13:32.721 { 00:13:32.721 "code": -32602, 00:13:32.721 "message": "Invalid SN v4vk&>jb'2grg)a(t`Fqa" 00:13:32.721 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 
00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 
00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.721 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:32.722 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=H 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:32.984 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x3a'
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=:
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48'
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]]
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '%e3PUbLQH$l6},3aV_Z"0F^mDPV"hX`xH:L?+w::H'
00:13:32.985 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%e3PUbLQH$l6},3aV_Z"0F^mDPV"hX`xH:L?+w::H' nqn.2016-06.io.spdk:cnode22874
[2024-11-20 11:14:25.707100] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22874: invalid model number '%e3PUbLQH$l6},3aV_Z"0F^mDPV"hX`xH:L?+w::H'
00:13:33.245 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:13:33.245 {
00:13:33.245 "nqn": "nqn.2016-06.io.spdk:cnode22874",
00:13:33.245 "model_number": "%e3PUbLQH$l6},3aV_Z\"0F^mDPV\"hX`xH:L?+w::H",
00:13:33.245 "method": "nvmf_create_subsystem",
00:13:33.245 "req_id": 1
00:13:33.245 }
00:13:33.245 Got JSON-RPC error response
00:13:33.245 response:
00:13:33.245 {
00:13:33.245 "code": -32602,
00:13:33.245 "message": "Invalid MN %e3PUbLQH$l6},3aV_Z\"0F^mDPV\"hX`xH:L?+w::H"
00:13:33.245 }'
00:13:33.245 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:13:33.245 {
00:13:33.245 "nqn": "nqn.2016-06.io.spdk:cnode22874",
00:13:33.245 "model_number": "%e3PUbLQH$l6},3aV_Z\"0F^mDPV\"hX`xH:L?+w::H",
00:13:33.245 "method": "nvmf_create_subsystem",
00:13:33.245 "req_id": 1
00:13:33.245 }
00:13:33.245 Got JSON-RPC error response
00:13:33.245 response:
00:13:33.245 {
00:13:33.245 "code": -32602,
00:13:33.245 "message": "Invalid MN %e3PUbLQH$l6},3aV_Z\"0F^mDPV\"hX`xH:L?+w::H"
00:13:33.245 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:33.245 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:13:33.245 [2024-11-20 11:14:25.903837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:33.245 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:13:33.506 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:13:33.506 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:13:33.506 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:13:33.506 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
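
The loop traced above is target/invalid.sh building a 41-character model number one random byte at a time (printf %x picks the code point, echo -e materialises it, string+= appends), and the nvmf_create_subsystem call that follows is expected to fail: the NVMe Identify Controller model-number field holds 40 bytes, so a 41-character string draws the "Invalid MN" JSON-RPC error seen in the captured output. A minimal sketch of the same pattern, assuming a printable non-space byte range; gen_random_mn is a hypothetical name, the real script inlines the loop:

    gen_random_mn() {
        local length=41 string='' ll
        for ((ll = 0; ll < length; ll++)); do
            # random printable ASCII byte 0x21..0x7e; spaces are avoided because
            # $(...) would strip one at the end of the substitution
            string+=$(echo -e "\\x$(printf %x $((RANDOM % 94 + 33)))")
        done
        echo "$string"
    }
    # expected to be rejected, as in the trace:
    #   scripts/rpc.py nvmf_create_subsystem -d "$(gen_random_mn)" nqn.2016-06.io.spdk:cnode22874
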
target/invalid.sh@67 -- # IP= 00:13:33.506 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:33.767 [2024-11-20 11:14:26.289022] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:33.767 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:33.767 { 00:13:33.767 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:33.767 "listen_address": { 00:13:33.767 "trtype": "tcp", 00:13:33.767 "traddr": "", 00:13:33.767 "trsvcid": "4421" 00:13:33.767 }, 00:13:33.767 "method": "nvmf_subsystem_remove_listener", 00:13:33.767 "req_id": 1 00:13:33.767 } 00:13:33.767 Got JSON-RPC error response 00:13:33.767 response: 00:13:33.767 { 00:13:33.767 "code": -32602, 00:13:33.767 "message": "Invalid parameters" 00:13:33.767 }' 00:13:33.767 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:33.767 { 00:13:33.767 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:33.767 "listen_address": { 00:13:33.767 "trtype": "tcp", 00:13:33.767 "traddr": "", 00:13:33.767 "trsvcid": "4421" 00:13:33.767 }, 00:13:33.767 "method": "nvmf_subsystem_remove_listener", 00:13:33.767 "req_id": 1 00:13:33.767 } 00:13:33.767 Got JSON-RPC error response 00:13:33.767 response: 00:13:33.767 { 00:13:33.767 "code": -32602, 00:13:33.767 "message": "Invalid parameters" 00:13:33.767 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:33.767 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6839 -i 0 00:13:33.767 [2024-11-20 11:14:26.477575] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6839: invalid cntlid range [0-65519] 00:13:34.028 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:34.028 { 00:13:34.028 "nqn": "nqn.2016-06.io.spdk:cnode6839", 00:13:34.028 "min_cntlid": 0, 00:13:34.028 "method": "nvmf_create_subsystem", 00:13:34.028 "req_id": 1 00:13:34.028 } 00:13:34.028 Got JSON-RPC error response 00:13:34.028 response: 00:13:34.028 { 00:13:34.028 "code": -32602, 00:13:34.028 "message": "Invalid cntlid range [0-65519]" 00:13:34.028 }' 00:13:34.028 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:34.028 { 00:13:34.028 "nqn": "nqn.2016-06.io.spdk:cnode6839", 00:13:34.028 "min_cntlid": 0, 00:13:34.028 "method": "nvmf_create_subsystem", 00:13:34.028 "req_id": 1 00:13:34.028 } 00:13:34.028 Got JSON-RPC error response 00:13:34.028 response: 00:13:34.028 { 00:13:34.028 "code": -32602, 00:13:34.028 "message": "Invalid cntlid range [0-65519]" 00:13:34.028 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.028 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14108 -i 65520 00:13:34.028 [2024-11-20 11:14:26.662154] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14108: invalid cntlid range [65520-65519] 00:13:34.028 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:34.028 { 00:13:34.028 "nqn": "nqn.2016-06.io.spdk:cnode14108", 00:13:34.028 "min_cntlid": 65520, 
00:13:34.028 "method": "nvmf_create_subsystem", 00:13:34.028 "req_id": 1 00:13:34.028 } 00:13:34.028 Got JSON-RPC error response 00:13:34.028 response: 00:13:34.028 { 00:13:34.028 "code": -32602, 00:13:34.028 "message": "Invalid cntlid range [65520-65519]" 00:13:34.028 }' 00:13:34.028 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:34.028 { 00:13:34.028 "nqn": "nqn.2016-06.io.spdk:cnode14108", 00:13:34.028 "min_cntlid": 65520, 00:13:34.028 "method": "nvmf_create_subsystem", 00:13:34.028 "req_id": 1 00:13:34.028 } 00:13:34.028 Got JSON-RPC error response 00:13:34.028 response: 00:13:34.028 { 00:13:34.028 "code": -32602, 00:13:34.028 "message": "Invalid cntlid range [65520-65519]" 00:13:34.029 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.029 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22532 -I 0 00:13:34.290 [2024-11-20 11:14:26.850762] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22532: invalid cntlid range [1-0] 00:13:34.290 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:34.290 { 00:13:34.290 "nqn": "nqn.2016-06.io.spdk:cnode22532", 00:13:34.290 "max_cntlid": 0, 00:13:34.290 "method": "nvmf_create_subsystem", 00:13:34.290 "req_id": 1 00:13:34.290 } 00:13:34.290 Got JSON-RPC error response 00:13:34.290 response: 00:13:34.290 { 00:13:34.290 "code": -32602, 00:13:34.290 "message": "Invalid cntlid range [1-0]" 00:13:34.290 }' 00:13:34.290 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:34.290 { 00:13:34.290 "nqn": "nqn.2016-06.io.spdk:cnode22532", 00:13:34.290 "max_cntlid": 0, 00:13:34.290 "method": "nvmf_create_subsystem", 00:13:34.290 "req_id": 1 00:13:34.290 } 00:13:34.290 Got JSON-RPC error response 00:13:34.290 response: 00:13:34.290 { 00:13:34.290 "code": -32602, 00:13:34.290 "message": "Invalid cntlid range [1-0]" 00:13:34.290 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.290 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17931 -I 65520 00:13:34.551 [2024-11-20 11:14:27.039364] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17931: invalid cntlid range [1-65520] 00:13:34.551 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:34.551 { 00:13:34.551 "nqn": "nqn.2016-06.io.spdk:cnode17931", 00:13:34.551 "max_cntlid": 65520, 00:13:34.551 "method": "nvmf_create_subsystem", 00:13:34.551 "req_id": 1 00:13:34.551 } 00:13:34.551 Got JSON-RPC error response 00:13:34.551 response: 00:13:34.551 { 00:13:34.551 "code": -32602, 00:13:34.551 "message": "Invalid cntlid range [1-65520]" 00:13:34.551 }' 00:13:34.551 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:34.551 { 00:13:34.551 "nqn": "nqn.2016-06.io.spdk:cnode17931", 00:13:34.551 "max_cntlid": 65520, 00:13:34.551 "method": "nvmf_create_subsystem", 00:13:34.551 "req_id": 1 00:13:34.551 } 00:13:34.551 Got JSON-RPC error response 00:13:34.551 response: 00:13:34.551 { 00:13:34.551 "code": -32602, 00:13:34.551 "message": "Invalid cntlid range [1-65520]" 00:13:34.551 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
00:13:34.551 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14551 -i 6 -I 5 00:13:34.551 [2024-11-20 11:14:27.223991] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14551: invalid cntlid range [6-5] 00:13:34.551 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:34.551 { 00:13:34.551 "nqn": "nqn.2016-06.io.spdk:cnode14551", 00:13:34.551 "min_cntlid": 6, 00:13:34.551 "max_cntlid": 5, 00:13:34.551 "method": "nvmf_create_subsystem", 00:13:34.551 "req_id": 1 00:13:34.551 } 00:13:34.551 Got JSON-RPC error response 00:13:34.551 response: 00:13:34.551 { 00:13:34.551 "code": -32602, 00:13:34.551 "message": "Invalid cntlid range [6-5]" 00:13:34.551 }' 00:13:34.551 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:34.551 { 00:13:34.551 "nqn": "nqn.2016-06.io.spdk:cnode14551", 00:13:34.551 "min_cntlid": 6, 00:13:34.551 "max_cntlid": 5, 00:13:34.551 "method": "nvmf_create_subsystem", 00:13:34.551 "req_id": 1 00:13:34.551 } 00:13:34.551 Got JSON-RPC error response 00:13:34.551 response: 00:13:34.551 { 00:13:34.551 "code": -32602, 00:13:34.551 "message": "Invalid cntlid range [6-5]" 00:13:34.551 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:34.551 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:34.812 { 00:13:34.812 "name": "foobar", 00:13:34.812 "method": "nvmf_delete_target", 00:13:34.812 "req_id": 1 00:13:34.812 } 00:13:34.812 Got JSON-RPC error response 00:13:34.812 response: 00:13:34.812 { 00:13:34.812 "code": -32602, 00:13:34.812 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:34.812 }' 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:34.812 { 00:13:34.812 "name": "foobar", 00:13:34.812 "method": "nvmf_delete_target", 00:13:34.812 "req_id": 1 00:13:34.812 } 00:13:34.812 Got JSON-RPC error response 00:13:34.812 response: 00:13:34.812 { 00:13:34.812 "code": -32602, 00:13:34.812 "message": "The specified target doesn't exist, cannot delete it." 
00:13:34.812 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.812 rmmod nvme_tcp 00:13:34.812 rmmod nvme_fabrics 00:13:34.812 rmmod nvme_keyring 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2657705 ']' 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2657705 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2657705 ']' 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2657705 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2657705 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2657705' 00:13:34.812 killing process with pid 2657705 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2657705 00:13:34.812 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2657705 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
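
nvmf_invalid finishes by asking the multi-target RPC helper to delete a target named foobar, confirming the "The specified target doesn't exist" failure, then tears everything down: nvmftestfini syncs, retries modprobe -v -r nvme-tcp until the module unloads (the rmmod lines above), drops nvme-fabrics, and kills reactor process 2657705. The killprocess helper visible in the trace reduces to roughly this sketch (simplified; the real helper also special-cases a sudo-wrapped pid, which is what the reactor_0 = sudo comparison above checks for):

    killprocess() {
        local pid=$1 process_name
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # works because the target was started by this shell
    }
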
-- # iptables-restore
00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:35.074 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:36.989 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:36.989
00:13:36.989 real 0m14.191s
00:13:36.989 user 0m21.125s
00:13:36.989 sys 0m6.724s
00:13:36.989 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:36.989 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:36.989 ************************************
00:13:36.989 END TEST nvmf_invalid
00:13:36.989 ************************************
00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:37.250 ************************************
00:13:37.250 START TEST nvmf_connect_stress
00:13:37.250 ************************************
00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:37.250 * Looking for test storage...
00:13:37.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.250 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:37.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.251 --rc genhtml_branch_coverage=1 00:13:37.251 --rc genhtml_function_coverage=1 00:13:37.251 --rc genhtml_legend=1 00:13:37.251 --rc geninfo_all_blocks=1 00:13:37.251 --rc geninfo_unexecuted_blocks=1 00:13:37.251 00:13:37.251 ' 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:37.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.251 --rc genhtml_branch_coverage=1 00:13:37.251 --rc genhtml_function_coverage=1 00:13:37.251 --rc genhtml_legend=1 00:13:37.251 --rc geninfo_all_blocks=1 00:13:37.251 --rc geninfo_unexecuted_blocks=1 00:13:37.251 00:13:37.251 ' 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:37.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.251 --rc genhtml_branch_coverage=1 00:13:37.251 --rc genhtml_function_coverage=1 00:13:37.251 --rc genhtml_legend=1 00:13:37.251 --rc geninfo_all_blocks=1 00:13:37.251 --rc geninfo_unexecuted_blocks=1 00:13:37.251 00:13:37.251 ' 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:37.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.251 --rc genhtml_branch_coverage=1 00:13:37.251 --rc genhtml_function_coverage=1 00:13:37.251 --rc genhtml_legend=1 00:13:37.251 --rc geninfo_all_blocks=1 00:13:37.251 --rc geninfo_unexecuted_blocks=1 00:13:37.251 00:13:37.251 ' 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
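
The scripts/common.sh walk above is the lcov version gate for the coverage options: lt 1.15 2 splits both version strings on any of . - : and compares them field by field, and because the installed lcov is 1.x the LCOV_OPTS/LCOV exports above pick the 1.x spelling of the --rc options. A compressed sketch of that comparison, assuming purely numeric fields (the real cmp_versions also validates each field through its decimal helper):

    cmp_versions() {                 # e.g. cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]             # equal versions satisfy ==, <= and >=
    }
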
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.251 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.512 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.513 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.513 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.513 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
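
The towering PATH above is not a bug in the job so much as an artifact of paths/export.sh being re-sourced by every nested script: each pass prepends the same three toolchain directories again without checking whether they are already present. A dedup-on-prepend guard of the usual shape would keep it flat; this is a suggestion, not what the script currently does:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH
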
-- # '[' '' -eq 1 ']' 00:13:37.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:37.513 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:45.656 11:14:37 
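
The "[: : integer expression expected" complaint above is harmless here but easy to silence: line 33 of test/nvmf/common.sh ends up running '[' '' -eq 1 ']' because the flag it tests is unset in this job's environment. Expanding with a numeric default avoids the error; a sketch, with FLAG standing in for the real variable name, which is not visible in this trace:

    # before: [ "$FLAG" -eq 1 ]   → "integer expression expected" when FLAG is empty
    if [ "${FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--interrupt-mode)   # placeholder for whatever line 33 appends
    fi
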
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:45.656 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:45.656 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:45.656 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:45.656 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
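
Both E810 ports (device ID 0x159b, bound to the ice driver) are then mapped to kernel interface names through sysfs, which is what prints the two "Found net devices under 0000:4b:00.x" lines. The heart of that lookup, condensed from the trace:

    for pci in "${pci_devs[@]}"; do
        # the kernel exposes each function's netdevs under its PCI device node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
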
-- # net_devs+=("${pci_net_devs[@]}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.656 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:45.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:13:45.657 00:13:45.657 --- 10.0.0.2 ping statistics --- 00:13:45.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.657 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:13:45.657 00:13:45.657 --- 10.0.0.1 ping statistics --- 00:13:45.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.657 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2662921 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2662921 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2662921 ']' 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
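
At this point nvmf_tcp_init has split the two ports across a network-namespace boundary and the two pings above prove each side reaches the other. Laid out as plain ip(8)/iptables steps, the topology the trace just built is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target: 0.387 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator: 0.239 ms

Running the target in its own namespace forces the NVMe/TCP traffic across the two physical E810 ports instead of short-circuiting through loopback.
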
/var/tmp/spdk.sock...' 00:13:45.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.657 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.657 [2024-11-20 11:14:37.596952] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:13:45.657 [2024-11-20 11:14:37.597017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.657 [2024-11-20 11:14:37.696616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.657 [2024-11-20 11:14:37.748168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.657 [2024-11-20 11:14:37.748219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.657 [2024-11-20 11:14:37.748231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.657 [2024-11-20 11:14:37.748239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.657 [2024-11-20 11:14:37.748245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.657 [2024-11-20 11:14:37.750053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.657 [2024-11-20 11:14:37.750217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.657 [2024-11-20 11:14:37.750218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 [2024-11-20 11:14:38.481492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
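
nvmfappstart then launches the target inside that namespace with core mask 0xE, which is why the DPDK startup above reports reactors on cores 1, 2 and 3 only, and waitforlisten polls (max_retries=100 in the trace) until pid 2662921 answers on /var/tmp/spdk.sock. A sketch of that launch-and-wait shape, with rpc_get_methods as our stand-in probe for whatever the real helper polls:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &      # mask 0xE = cores 1-3
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                       # bounded wait for the RPC socket
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
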
00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 [2024-11-20 11:14:38.507059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.919 NULL1 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2662969 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:45.919 11:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.919 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.491 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.491 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:46.491 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.491 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.491 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.752 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.752 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:46.752 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.752 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.752 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.012 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.012 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:47.012 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.012 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.012 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.274 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.274 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:47.274 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.274 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.274 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.535 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.535 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:47.535 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.535 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.535 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.109 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.109 11:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:48.109 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.109 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.109 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.370 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:48.370 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.370 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.370 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.632 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.632 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:48.633 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.633 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.633 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.893 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.893 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:48.893 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.893 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.893 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.154 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.154 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:49.154 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.154 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.154 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.723 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.723 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:49.723 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.723 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.723 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.983 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.983 11:14:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:49.983 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.983 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.983 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.243 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.243 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:50.243 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.243 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.243 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.503 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.503 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:50.503 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.503 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.503 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.072 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.072 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:51.072 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.072 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.072 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.332 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.332 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:51.332 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.332 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.332 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.594 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.594 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:51.594 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.594 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.594 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.855 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.855 11:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:51.855 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.855 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.855 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.116 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.116 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:52.116 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.116 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.116 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.709 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.709 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:52.709 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.709 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.709 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.023 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.023 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:53.023 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.023 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.023 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.291 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.291 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:53.291 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.291 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.291 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.552 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.552 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:53.552 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.552 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.552 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.812 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.812 11:14:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:53.812 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.812 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.812 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.072 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.072 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:54.072 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.072 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.072 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.644 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.644 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:54.644 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.644 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.644 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.910 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.910 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:54.910 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.910 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.910 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.177 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.177 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:55.177 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.177 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.177 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.438 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.438 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:55.438 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.438 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.438 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.699 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.699 11:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:55.699 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.699 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.699 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.959 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.219 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.219 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2662969 00:13:56.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2662969) - No such process 00:13:56.219 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2662969 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.220 rmmod nvme_tcp 00:13:56.220 rmmod nvme_fabrics 00:13:56.220 rmmod nvme_keyring 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2662921 ']' 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2662921 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2662921 ']' 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2662921 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662921 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
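The long run of kill -0 2662969 records above is the heart of connect_stress: kill -0 delivers no signal and only tests whether the PID still exists, so the harness keeps replaying a batch of RPCs at the target for as long as the stressor stays alive. The twenty seq 1 20 / cat records earlier build that batch in rpc.txt. A minimal sketch of the loop, with the shape inferred from the connect_stress.sh line numbers in the trace (27-28, 34, 35, 38, 39), not copied from the script itself:

    # lines 27-28: batch 20 requests into $rpcs
    # (build_one_rpc_request is a hypothetical stand-in for the traced cat)
    for i in $(seq 1 20); do build_one_rpc_request >> "$rpcs"; done
    while kill -0 "$PERF_PID"; do    # line 34: succeeds while connect_stress runs
        rpc_cmd < "$rpcs"            # line 35: replay the batched requests at the target
    done                             # the final check prints 'No such process', as logged above
    wait "$PERF_PID"                 # line 38: reap the stressor's exit status
    rm -f "$rpcs"                    # line 39: discard the request batch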
00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662921' 00:13:56.220 killing process with pid 2662921 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2662921 00:13:56.220 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2662921 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.481 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:58.394 00:13:58.394 real 0m21.267s 00:13:58.394 user 0m42.317s 00:13:58.394 sys 0m9.293s 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.394 ************************************ 00:13:58.394 END TEST nvmf_connect_stress 00:13:58.394 ************************************ 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.394 11:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.656 ************************************ 00:13:58.656 START TEST nvmf_fused_ordering 00:13:58.656 ************************************ 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.656 * Looking for test storage... 
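nvmf_connect_stress closes with the END TEST banner and real/user/sys timings above, and run_test immediately launches the next script, fused_ordering.sh --transport=tcp, after the '[' 3 -le 1 ']' argument-count check. A minimal sketch of the run_test wrapper shape implied by those banners and timings (the real function in autotest_common.sh also manages xtrace; this reconstruction is an assumption, not the actual source):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"      # e.g. run_test nvmf_fused_ordering .../fused_ordering.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }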
00:13:58.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.656 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:58.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.657 --rc genhtml_branch_coverage=1 00:13:58.657 --rc genhtml_function_coverage=1 00:13:58.657 --rc genhtml_legend=1 00:13:58.657 --rc geninfo_all_blocks=1 00:13:58.657 --rc geninfo_unexecuted_blocks=1 00:13:58.657 00:13:58.657 ' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:58.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.657 --rc genhtml_branch_coverage=1 00:13:58.657 --rc genhtml_function_coverage=1 00:13:58.657 --rc genhtml_legend=1 00:13:58.657 --rc geninfo_all_blocks=1 00:13:58.657 --rc geninfo_unexecuted_blocks=1 00:13:58.657 00:13:58.657 ' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:58.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.657 --rc genhtml_branch_coverage=1 00:13:58.657 --rc genhtml_function_coverage=1 00:13:58.657 --rc genhtml_legend=1 00:13:58.657 --rc geninfo_all_blocks=1 00:13:58.657 --rc geninfo_unexecuted_blocks=1 00:13:58.657 00:13:58.657 ' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:58.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.657 --rc genhtml_branch_coverage=1 00:13:58.657 --rc genhtml_function_coverage=1 00:13:58.657 --rc genhtml_legend=1 00:13:58.657 --rc geninfo_all_blocks=1 00:13:58.657 --rc geninfo_unexecuted_blocks=1 00:13:58.657 00:13:58.657 ' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:58.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:58.657 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.796 11:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:06.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:06.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:06.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.796 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:06.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:06.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:14:06.797 00:14:06.797 --- 10.0.0.2 ping statistics --- 00:14:06.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.797 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:06.797 00:14:06.797 --- 10.0.0.1 ping statistics --- 00:14:06.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.797 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2669329 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2669329 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2669329 ']' 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:06.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.797 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:06.797 [2024-11-20 11:14:58.935594] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:14:06.797 [2024-11-20 11:14:58.935661] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.797 [2024-11-20 11:14:59.034043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.797 [2024-11-20 11:14:59.083685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.797 [2024-11-20 11:14:59.083739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.797 [2024-11-20 11:14:59.083748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.797 [2024-11-20 11:14:59.083755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.797 [2024-11-20 11:14:59.083761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.797 [2024-11-20 11:14:59.084519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.059 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.320 [2024-11-20 11:14:59.802037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.320 [2024-11-20 11:14:59.826304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.320 NULL1 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.320 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:07.320 [2024-11-20 11:14:59.896366] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
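The nvmf_tcp_init trace above is the network fixture for everything that follows: one E810 port (cvl_0_0) is moved into a private network namespace to host the target, while its peer port (cvl_0_1) stays in the root namespace as the initiator side, so NVMe/TCP traffic crosses a real link. A minimal standalone sketch of that fixture, reconstructed only from the commands visible in the trace (nvmf_tcp_init and ipts are helpers in test/nvmf/common.sh, inlined here), might look like:

#!/usr/bin/env bash
# Sketch of the traced topology; assumes cvl_0_0/cvl_0_1 are the two E810
# net devices found in the PCI scan above.
set -e
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"            # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP (root namespace)
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# ipts() tags the rule with an SPDK_NVMF comment so nvmftestfini can strip it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                # root namespace reaches the target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # namespace reaches the initiator

The two one-packet pings in the trace (0.640 ms and 0.186 ms round trips) verify both directions of this path before the target application is started.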
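The rpc_cmd sequence above provisions the target that the fused_ordering output below exercises. rpc_cmd here dispatches to scripts/rpc.py against the default /var/tmp/spdk.sock; under that assumption, the traced sequence condenses to roughly this sketch (not the literal test script):

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

# Target runs inside the namespace on one core (-m 0x2) with all tracepoint
# groups enabled (-e 0xFFFF), matching the EAL/trace banner in the log.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
# (the harness blocks on waitforlisten until /var/tmp/spdk.sock answers)

"$RPC" nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as traced
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512 B blocks
"$RPC" bdev_wait_for_examine
"$RPC" nvmf_subsystem_add_ns "$NQN" NULL1          # shows up as "Namespace ID: 1 size: 1GB"

# Initiator-side binary: connects over TCP and logs one fused_ordering(N)
# line per completed iteration (0 through 1023 in the output below).
"$SPDK/test/nvme/fused_ordering/fused_ordering" \
  -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"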
00:14:07.320 [2024-11-20 11:14:59.896411] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669414 ] 00:14:07.892 Attached to nqn.2016-06.io.spdk:cnode1 00:14:07.892 Namespace ID: 1 size: 1GB
00:14:07.892 fused_ordering(0) ... fused_ordering(1023) [1024 fused-ordering iterations completed in sequence between 00:14:07.892 and 00:14:09.931; per-iteration lines condensed]
00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.931 rmmod nvme_tcp 00:14:09.931 rmmod nvme_fabrics 00:14:09.931 rmmod nvme_keyring 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:09.931 11:15:02
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2669329 ']' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2669329 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2669329 ']' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2669329 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669329 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669329' 00:14:09.931 killing process with pid 2669329 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2669329 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2669329 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.931 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.477 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:12.478 00:14:12.478 real 0m13.592s 00:14:12.478 user 0m7.256s 00:14:12.478 sys 0m7.254s 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:12.478 ************************************ 00:14:12.478 END TEST nvmf_fused_ordering 00:14:12.478 
************************************ 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.478 ************************************ 00:14:12.478 START TEST nvmf_ns_masking 00:14:12.478 ************************************ 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:12.478 * Looking for test storage... 00:14:12.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:12.478 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.478 --rc genhtml_branch_coverage=1 00:14:12.478 --rc genhtml_function_coverage=1 00:14:12.478 --rc genhtml_legend=1 00:14:12.478 --rc geninfo_all_blocks=1 00:14:12.478 --rc geninfo_unexecuted_blocks=1 00:14:12.478 00:14:12.478 ' 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.478 --rc genhtml_branch_coverage=1 00:14:12.478 --rc genhtml_function_coverage=1 00:14:12.478 --rc genhtml_legend=1 00:14:12.478 --rc geninfo_all_blocks=1 00:14:12.478 --rc geninfo_unexecuted_blocks=1 00:14:12.478 00:14:12.478 ' 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.478 --rc genhtml_branch_coverage=1 00:14:12.478 --rc genhtml_function_coverage=1 00:14:12.478 --rc genhtml_legend=1 00:14:12.478 --rc geninfo_all_blocks=1 00:14:12.478 --rc geninfo_unexecuted_blocks=1 00:14:12.478 00:14:12.478 ' 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:12.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.478 --rc genhtml_branch_coverage=1 00:14:12.478 --rc genhtml_function_coverage=1 00:14:12.478 --rc genhtml_legend=1 00:14:12.478 --rc geninfo_all_blocks=1 00:14:12.478 --rc geninfo_unexecuted_blocks=1 00:14:12.478 00:14:12.478 ' 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.478 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=260fb77a-66b8-486a-9ecf-76356a13703e 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=10121a1b-66b2-4784-91a4-e7d0994dd814 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4d3a2862-709b-4ac3-a44a-351ad7248316 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:12.479 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:20.614 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.614 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:20.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:20.615 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:20.615 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:20.615 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
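The records above resolve each detected E810 function (vendor 0x8086, device 0x159b) to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A minimal stand-alone sketch of that lookup, assuming only the vendor/device IDs seen in this run (the loop in nvmf/common.sh does more bookkeeping than this):

  #!/usr/bin/env bash
  # Map Intel E810 PCI functions (0x8086:0x159b) to their net devices,
  # mirroring the sysfs glob the harness runs in the trace above.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
      [[ $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"   # e.g. 0000:4b:00.0 -> cvl_0_0
      done
  done

On this machine both functions map to the cvl_0_0/cvl_0_1 pair that the trace then partitions into target and initiator interfaces.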
00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:20.615 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.615 11:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:20.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:14:20.615 00:14:20.615 --- 10.0.0.2 ping statistics --- 00:14:20.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.615 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:14:20.615 00:14:20.615 --- 10.0.0.1 ping statistics --- 00:14:20.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.615 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2674773 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2674773 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2674773 ']' 00:14:20.615 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.616 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.616 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.616 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.616 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.616 [2024-11-20 11:15:12.684895] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:14:20.616 [2024-11-20 11:15:12.684959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.616 [2024-11-20 11:15:12.787195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.616 [2024-11-20 11:15:12.837643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.616 [2024-11-20 11:15:12.837702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.616 [2024-11-20 11:15:12.837711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.616 [2024-11-20 11:15:12.837718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.616 [2024-11-20 11:15:12.837724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
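waitforlisten then blocks until the nvmf_tgt process just launched inside the namespace answers on /var/tmp/spdk.sock. A hedged sketch of such a readiness loop; the probe method and retry budget below are assumptions for illustration, not the harness's exact code:

  # Poll the target's default RPC socket until it accepts a request.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break  # assumed probe RPC
      sleep 0.5
  done
  (( i < 100 )) || { echo "nvmf_tgt never started listening" >&2; exit 1; }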
00:14:20.616 [2024-11-20 11:15:12.838549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.876 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:21.136 [2024-11-20 11:15:13.717263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.136 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:21.136 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:21.136 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:21.395 Malloc1 00:14:21.396 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:21.656 Malloc2 00:14:21.656 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:21.656 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:21.916 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.177 [2024-11-20 11:15:14.747355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.177 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:22.177 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d3a2862-709b-4ac3-a44a-351ad7248316 -a 10.0.0.2 -s 4420 -i 4 00:14:22.437 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:22.437 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:22.437 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:22.437 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:22.437 
11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.349 [ 0]:0x1 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.349 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8880370ea9c54fa98e394a1aece3fd82 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8880370ea9c54fa98e394a1aece3fd82 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.610 [ 0]:0x1 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.610 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8880370ea9c54fa98e394a1aece3fd82 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8880370ea9c54fa98e394a1aece3fd82 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.871 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.871 [ 1]:0x2 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.132 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:25.392 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:25.392 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d3a2862-709b-4ac3-a44a-351ad7248316 -a 10.0.0.2 -s 4420 -i 4 00:14:25.392 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:25.392 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:25.392 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.392 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:25.392 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:25.392 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.935 [ 0]:0x2 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.935 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.935 [ 0]:0x1 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8880370ea9c54fa98e394a1aece3fd82 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8880370ea9c54fa98e394a1aece3fd82 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.936 [ 1]:0x2 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.936 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.196 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.196 [ 0]:0x2 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:28.196 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.456 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.456 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:28.456 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4d3a2862-709b-4ac3-a44a-351ad7248316 -a 10.0.0.2 -s 4420 -i 4 00:14:28.716 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:28.716 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:28.716 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:28.716 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:28.716 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:28.716 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:30.626 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:30.885 [ 0]:0x1 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8880370ea9c54fa98e394a1aece3fd82 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8880370ea9c54fa98e394a1aece3fd82 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:30.885 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:30.885 [ 1]:0x2 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.145 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.406 [ 0]:0x2 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.406 11:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:31.406 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:31.406 [2024-11-20 11:15:24.141624] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:31.666 request: 00:14:31.666 { 00:14:31.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.666 "nsid": 2, 00:14:31.666 "host": "nqn.2016-06.io.spdk:host1", 00:14:31.666 "method": "nvmf_ns_remove_host", 00:14:31.666 "req_id": 1 00:14:31.666 } 00:14:31.666 Got JSON-RPC error response 00:14:31.666 response: 00:14:31.666 { 00:14:31.666 "code": -32602, 00:14:31.666 "message": "Invalid parameters" 00:14:31.666 } 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:31.666 11:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:31.666 [ 0]:0x2 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3f008f3781874c05ac67b40b0ac59d12 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3f008f3781874c05ac67b40b0ac59d12 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:31.666 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:31.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2677095 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2677095 /var/tmp/host.sock 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2677095 ']' 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:31.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.667 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.667 [2024-11-20 11:15:24.382538] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:14:31.667 [2024-11-20 11:15:24.382589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677095 ] 00:14:31.926 [2024-11-20 11:15:24.469414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.926 [2024-11-20 11:15:24.505569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.495 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.495 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:32.495 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.754 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:33.014 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 260fb77a-66b8-486a-9ecf-76356a13703e 00:14:33.014 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.014 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 260FB77A66B8486A9ECF76356A13703E -i 00:14:33.014 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 10121a1b-66b2-4784-91a4-e7d0994dd814 00:14:33.014 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:33.014 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 10121A1B66B2478491A4E7D0994DD814 -i 00:14:33.274 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.533 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:33.793 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:33.793 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:34.054 nvme0n1 00:14:34.054 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:34.054 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:34.320 nvme1n2 00:14:34.320 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:34.320 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:34.320 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:34.320 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:34.320 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:34.320 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:34.320 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:34.320 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:34.320 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:34.668 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 260fb77a-66b8-486a-9ecf-76356a13703e == \2\6\0\f\b\7\7\a\-\6\6\b\8\-\4\8\6\a\-\9\e\c\f\-\7\6\3\5\6\a\1\3\7\0\3\e ]] 00:14:34.668 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:34.668 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:34.668 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:34.930 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
10121a1b-66b2-4784-91a4-e7d0994dd814 == \1\0\1\2\1\a\1\b\-\6\6\b\2\-\4\7\8\4\-\9\1\a\4\-\e\7\d\0\9\9\4\d\d\8\1\4 ]] 00:14:34.930 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.930 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 260fb77a-66b8-486a-9ecf-76356a13703e 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 260FB77A66B8486A9ECF76356A13703E 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 260FB77A66B8486A9ECF76356A13703E 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.190 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 260FB77A66B8486A9ECF76356A13703E 00:14:35.450 [2024-11-20 11:15:27.943571] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:35.450 [2024-11-20 11:15:27.943600] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:35.450 [2024-11-20 11:15:27.943606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.450 request: 00:14:35.450 { 00:14:35.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.450 "namespace": { 00:14:35.450 "bdev_name": 
"invalid", 00:14:35.450 "nsid": 1, 00:14:35.450 "nguid": "260FB77A66B8486A9ECF76356A13703E", 00:14:35.450 "no_auto_visible": false 00:14:35.450 }, 00:14:35.450 "method": "nvmf_subsystem_add_ns", 00:14:35.450 "req_id": 1 00:14:35.450 } 00:14:35.450 Got JSON-RPC error response 00:14:35.450 response: 00:14:35.450 { 00:14:35.450 "code": -32602, 00:14:35.450 "message": "Invalid parameters" 00:14:35.450 } 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 260fb77a-66b8-486a-9ecf-76356a13703e 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:35.450 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 260FB77A66B8486A9ECF76356A13703E -i 00:14:35.450 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2677095 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2677095 ']' 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2677095 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2677095 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2677095' 00:14:37.992 killing process with pid 2677095 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2677095 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2677095 00:14:37.992 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.253 rmmod nvme_tcp 00:14:38.253 rmmod nvme_fabrics 00:14:38.253 rmmod nvme_keyring 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2674773 ']' 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2674773 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2674773 ']' 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2674773 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.253 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2674773 00:14:38.254 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.254 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.254 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2674773' 00:14:38.254 killing process with pid 2674773 00:14:38.254 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2674773 00:14:38.254 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2674773 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.515 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:40.431 00:14:40.431 real 0m28.305s 00:14:40.431 user 0m32.183s 00:14:40.431 sys 0m8.317s 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:40.431 ************************************ 00:14:40.431 END TEST nvmf_ns_masking 00:14:40.431 ************************************ 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.431 11:15:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.693 ************************************ 00:14:40.693 START TEST nvmf_nvme_cli 00:14:40.693 ************************************ 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:40.693 * Looking for test storage... 
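For reference, the ns_is_visible check exercised throughout the nvmf_ns_masking test above reduces to the following pattern. This is a paraphrase reconstructed from the xtrace at target/ns_masking.sh@43-45, not the verbatim SPDK helper:

    ns_is_visible() {
        local nsid=$1                                        # e.g. 0x1 or 0x2
        nvme list-ns /dev/nvme0 | grep "$nsid"               # prints e.g. "[ 0]:0x2" when the ns is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # An all-zero NGUID is what a masked (non-visible) namespace reports in this test.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }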
00:14:40.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.693 --rc genhtml_branch_coverage=1 00:14:40.693 --rc genhtml_function_coverage=1 00:14:40.693 --rc genhtml_legend=1 00:14:40.693 --rc geninfo_all_blocks=1 00:14:40.693 --rc geninfo_unexecuted_blocks=1 00:14:40.693 00:14:40.693 ' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.693 --rc genhtml_branch_coverage=1 00:14:40.693 --rc genhtml_function_coverage=1 00:14:40.693 --rc genhtml_legend=1 00:14:40.693 --rc geninfo_all_blocks=1 00:14:40.693 --rc geninfo_unexecuted_blocks=1 00:14:40.693 00:14:40.693 ' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.693 --rc genhtml_branch_coverage=1 00:14:40.693 --rc genhtml_function_coverage=1 00:14:40.693 --rc genhtml_legend=1 00:14:40.693 --rc geninfo_all_blocks=1 00:14:40.693 --rc geninfo_unexecuted_blocks=1 00:14:40.693 00:14:40.693 ' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:40.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.693 --rc genhtml_branch_coverage=1 00:14:40.693 --rc genhtml_function_coverage=1 00:14:40.693 --rc genhtml_legend=1 00:14:40.693 --rc geninfo_all_blocks=1 00:14:40.693 --rc geninfo_unexecuted_blocks=1 00:14:40.693 00:14:40.693 ' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
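The version gate traced above (scripts/common.sh@333-368) compares dotted version strings field by field; condensed, the logic is roughly the following sketch, a paraphrase rather than the verbatim helper:

    cmp_versions() {            # e.g. cmp_versions 1.15 '<' 2  (the "lt 1.15 2" call above)
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]      # all fields equal: true only for ==, <=, >=
    }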
00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.693 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.955 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.955 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:49.139 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.139 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:49.140 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.140 
11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:49.140 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:49.140 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:14:49.140 00:14:49.140 --- 10.0.0.2 ping statistics --- 00:14:49.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.140 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:14:49.140 00:14:49.140 --- 10.0.0.1 ping statistics --- 00:14:49.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.140 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2682734 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2682734 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2682734 ']' 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.140 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.140 [2024-11-20 11:15:40.971134] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
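The NVMe/TCP test topology that nvmf_tcp_init assembled above, condensed into one place (a sketch of the traced commands at nvmf/common.sh@250-291; interface names cvl_0_0/cvl_0_1 are the e810 ports discovered earlier):

    ip netns add cvl_0_0_ns_spdk                          # the target runs inside this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port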
00:14:49.140 [2024-11-20 11:15:40.971209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.140 [2024-11-20 11:15:41.073661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.140 [2024-11-20 11:15:41.129304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.140 [2024-11-20 11:15:41.129357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.140 [2024-11-20 11:15:41.129366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.140 [2024-11-20 11:15:41.129373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.140 [2024-11-20 11:15:41.129380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.140 [2024-11-20 11:15:41.131262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.140 [2024-11-20 11:15:41.131419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.140 [2024-11-20 11:15:41.131582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.140 [2024-11-20 11:15:41.131582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.140 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.141 [2024-11-20 11:15:41.851688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.141 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 Malloc0 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 Malloc1 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 [2024-11-20 11:15:41.964952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.402 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:49.664 00:14:49.664 Discovery Log Number of Records 2, Generation counter 2 00:14:49.664 =====Discovery Log Entry 0====== 00:14:49.664 trtype: tcp 00:14:49.664 adrfam: ipv4 00:14:49.664 subtype: current discovery subsystem 00:14:49.664 treq: not required 00:14:49.664 portid: 0 00:14:49.664 trsvcid: 4420 00:14:49.664 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:49.664 traddr: 10.0.0.2 00:14:49.664 eflags: explicit discovery connections, duplicate discovery information 00:14:49.664 sectype: none 00:14:49.664 =====Discovery Log Entry 1====== 00:14:49.664 trtype: tcp 00:14:49.664 adrfam: ipv4 00:14:49.664 subtype: nvme subsystem 00:14:49.664 treq: not required 00:14:49.664 portid: 0 00:14:49.664 trsvcid: 4420 00:14:49.664 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:49.664 traddr: 10.0.0.2 00:14:49.664 eflags: none 00:14:49.664 sectype: none 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:49.664 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.050 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:51.050 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:51.050 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.050 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:51.050 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:51.050 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:53.591 11:15:45 
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:14:53.591 /dev/nvme0n2 ]]
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:14:53.591 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:53.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:53.592 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:53.853 rmmod nvme_tcp
00:14:53.853 rmmod nvme_fabrics
00:14:53.853 rmmod nvme_keyring
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2682734 ']'
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2682734
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2682734 ']'
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2682734
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2682734
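Note: the teardown traced here mirrors the setup: disconnect the host, wait for the test serial to vanish from lsblk, delete the subsystem, then stop the target. A minimal sketch of the same steps, assuming the target pid is held in $nvmfpid the way the harness holds it:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # waitforserial_disconnect's core: loop until no block device carries the test serial
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmftestfini's killprocess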
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2682734'
00:14:53.853 killing process with pid 2682734
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2682734
00:14:53.853 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2682734
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:54.113 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:56.023
00:14:56.023 real 0m15.509s
00:14:56.023 user 0m24.206s
00:14:56.023 sys 0m6.340s
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:56.023 ************************************
00:14:56.023 END TEST nvmf_nvme_cli
00:14:56.023 ************************************
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:56.023 11:15:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:56.284 ************************************
00:14:56.284 START TEST nvmf_vfio_user
00:14:56.284 ************************************
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:14:56.284 * Looking for test storage...
00:14:56.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.284 --rc genhtml_branch_coverage=1 00:14:56.284 --rc genhtml_function_coverage=1 00:14:56.284 --rc genhtml_legend=1 00:14:56.284 --rc geninfo_all_blocks=1 00:14:56.284 --rc geninfo_unexecuted_blocks=1 00:14:56.284 00:14:56.284 ' 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.284 --rc genhtml_branch_coverage=1 00:14:56.284 --rc genhtml_function_coverage=1 00:14:56.284 --rc genhtml_legend=1 00:14:56.284 --rc geninfo_all_blocks=1 00:14:56.284 --rc geninfo_unexecuted_blocks=1 00:14:56.284 00:14:56.284 ' 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.284 --rc genhtml_branch_coverage=1 00:14:56.284 --rc genhtml_function_coverage=1 00:14:56.284 --rc genhtml_legend=1 00:14:56.284 --rc geninfo_all_blocks=1 00:14:56.284 --rc geninfo_unexecuted_blocks=1 00:14:56.284 00:14:56.284 ' 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.284 --rc genhtml_branch_coverage=1 00:14:56.284 --rc genhtml_function_coverage=1 00:14:56.284 --rc genhtml_legend=1 00:14:56.284 --rc geninfo_all_blocks=1 00:14:56.284 --rc geninfo_unexecuted_blocks=1 00:14:56.284 00:14:56.284 ' 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.284 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.284 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.285 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2684303 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2684303' 00:14:56.545 Process pid: 2684303 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2684303 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2684303 ']' 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.545 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.545 [2024-11-20 11:15:49.080851] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:14:56.545 [2024-11-20 11:15:49.080910] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.545 [2024-11-20 11:15:49.162227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.545 [2024-11-20 11:15:49.193937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.545 [2024-11-20 11:15:49.193969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:56.545 [2024-11-20 11:15:49.193975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.545 [2024-11-20 11:15:49.193979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.545 [2024-11-20 11:15:49.193984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.545 [2024-11-20 11:15:49.195436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.545 [2024-11-20 11:15:49.195597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.545 [2024-11-20 11:15:49.195744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.545 [2024-11-20 11:15:49.195746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.489 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.489 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:57.489 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.430 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:58.430 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.430 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.430 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.430 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.430 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:58.691 Malloc1 00:14:58.691 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:58.951 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:58.951 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.211 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.211 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.211 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.471 Malloc2 00:14:59.471 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
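Note: the vfio-user bring-up traced here follows a fixed pattern, with the second endpoint's namespace and listener completed the same way immediately below: start nvmf_tgt, create the VFIOUSER transport, then give each subsystem a malloc namespace and a listener whose address is a filesystem path rather than an IP. A consolidated sketch using this run's values; rpc.py stands for scripts/rpc.py, and the fixed sleep stands in for the harness's waitforlisten on /var/tmp/spdk.sock:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    sleep 1
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB bdev, 512-byte blocks
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done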
00:14:59.730 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:59.730 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:59.993 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:59.993 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:59.993 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.993 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:59.993 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:59.993 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:59.993 [2024-11-20 11:15:52.579683] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:14:59.993 [2024-11-20 11:15:52.579705] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2685015 ] 00:14:59.993 [2024-11-20 11:15:52.617248] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:59.993 [2024-11-20 11:15:52.619534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.993 [2024-11-20 11:15:52.619553] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f85af99a000 00:14:59.993 [2024-11-20 11:15:52.620536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.621533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.622541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.623547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.624555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.625560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.626567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.627571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:59.993 [2024-11-20 11:15:52.628582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:59.993 [2024-11-20 11:15:52.628589] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f85af98f000 00:14:59.993 [2024-11-20 11:15:52.629501] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.993 [2024-11-20 11:15:52.642449] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:59.993 [2024-11-20 11:15:52.642467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:59.993 [2024-11-20 11:15:52.647680] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:59.993 [2024-11-20 11:15:52.647717] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:59.993 [2024-11-20 11:15:52.647775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:59.993 [2024-11-20 11:15:52.647787] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:59.993 [2024-11-20 11:15:52.647791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:59.993 [2024-11-20 11:15:52.648683] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:59.993 [2024-11-20 11:15:52.648691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:59.993 [2024-11-20 11:15:52.648696] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:59.993 [2024-11-20 11:15:52.649687] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:59.993 [2024-11-20 11:15:52.649693] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:59.993 [2024-11-20 11:15:52.649698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:59.993 [2024-11-20 11:15:52.650699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:59.993 [2024-11-20 11:15:52.650706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:59.993 [2024-11-20 11:15:52.651706] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
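Note: the BAR mapping and register reads in this stretch of the trace are spdk_nvme_identify attaching to the first vfio-user endpoint; everything from here through the controller report below belongs to that single identify pass. As invoked earlier in the trace, the command is equivalent to:

    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci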
00:14:59.993 [2024-11-20 11:15:52.651713] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:59.993 [2024-11-20 11:15:52.651716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:59.993 [2024-11-20 11:15:52.651721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:59.993 [2024-11-20 11:15:52.651827] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:59.993 [2024-11-20 11:15:52.651832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:59.993 [2024-11-20 11:15:52.651836] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:59.993 [2024-11-20 11:15:52.652714] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:59.993 [2024-11-20 11:15:52.653721] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:59.993 [2024-11-20 11:15:52.654731] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:59.993 [2024-11-20 11:15:52.655731] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.994 [2024-11-20 11:15:52.655783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:59.994 [2024-11-20 11:15:52.656744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:59.994 [2024-11-20 11:15:52.656750] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:59.994 [2024-11-20 11:15:52.656753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656768] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:59.994 [2024-11-20 11:15:52.656773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656783] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.994 [2024-11-20 11:15:52.656787] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.994 [2024-11-20 11:15:52.656789] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.994 [2024-11-20 11:15:52.656799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.656828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.656835] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:59.994 [2024-11-20 11:15:52.656838] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:59.994 [2024-11-20 11:15:52.656841] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:59.994 [2024-11-20 11:15:52.656845] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:59.994 [2024-11-20 11:15:52.656849] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:59.994 [2024-11-20 11:15:52.656852] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:59.994 [2024-11-20 11:15:52.656856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.656881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.656889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.994 [2024-11-20 11:15:52.656895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.994 [2024-11-20 11:15:52.656901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.994 [2024-11-20 11:15:52.656907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:59.994 [2024-11-20 11:15:52.656910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.656932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.656937] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:59.994 
[2024-11-20 11:15:52.656941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.656956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.656963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657018] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:59.994 [2024-11-20 11:15:52.657021] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:59.994 [2024-11-20 11:15:52.657024] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.994 [2024-11-20 11:15:52.657028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657044] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:59.994 [2024-11-20 11:15:52.657050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657061] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.994 [2024-11-20 11:15:52.657064] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.994 [2024-11-20 11:15:52.657067] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.994 [2024-11-20 11:15:52.657071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657109] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:59.994 [2024-11-20 11:15:52.657112] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.994 [2024-11-20 11:15:52.657114] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.994 [2024-11-20 11:15:52.657118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657169] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:59.994 [2024-11-20 11:15:52.657172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:59.994 [2024-11-20 11:15:52.657175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:59.994 [2024-11-20 11:15:52.657189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:59.994 [2024-11-20 11:15:52.657248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:59.994 [2024-11-20 11:15:52.657257] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:59.994 [2024-11-20 11:15:52.657260] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:59.994 [2024-11-20 11:15:52.657263] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:59.994 [2024-11-20 11:15:52.657265] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:59.995 [2024-11-20 11:15:52.657268] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:59.995 [2024-11-20 11:15:52.657272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:59.995 [2024-11-20 11:15:52.657277] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:59.995 [2024-11-20 11:15:52.657280] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:59.995 [2024-11-20 11:15:52.657283] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.995 [2024-11-20 11:15:52.657287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:59.995 [2024-11-20 11:15:52.657292] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:59.995 [2024-11-20 11:15:52.657295] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:59.995 [2024-11-20 11:15:52.657297] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.995 [2024-11-20 11:15:52.657302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:59.995 [2024-11-20 11:15:52.657307] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:59.995 [2024-11-20 11:15:52.657311] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:59.995 [2024-11-20 11:15:52.657313] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:59.995 [2024-11-20 11:15:52.657317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:59.995 [2024-11-20 11:15:52.657322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:59.995 [2024-11-20 11:15:52.657330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0
00:14:59.995 [2024-11-20 11:15:52.657338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:14:59.995 [2024-11-20 11:15:52.657343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:14:59.995 =====================================================
00:14:59.995 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:14:59.995 =====================================================
00:14:59.995 Controller Capabilities/Features
00:14:59.995 ================================
00:14:59.995 Vendor ID: 4e58
00:14:59.995 Subsystem Vendor ID: 4e58
00:14:59.995 Serial Number: SPDK1
00:14:59.995 Model Number: SPDK bdev Controller
00:14:59.995 Firmware Version: 25.01
00:14:59.995 Recommended Arb Burst: 6
00:14:59.995 IEEE OUI Identifier: 8d 6b 50
00:14:59.995 Multi-path I/O
00:14:59.995 May have multiple subsystem ports: Yes
00:14:59.995 May have multiple controllers: Yes
00:14:59.995 Associated with SR-IOV VF: No
00:14:59.995 Max Data Transfer Size: 131072
00:14:59.995 Max Number of Namespaces: 32
00:14:59.995 Max Number of I/O Queues: 127
00:14:59.995 NVMe Specification Version (VS): 1.3
00:14:59.995 NVMe Specification Version (Identify): 1.3
00:14:59.995 Maximum Queue Entries: 256
00:14:59.995 Contiguous Queues Required: Yes
00:14:59.995 Arbitration Mechanisms Supported
00:14:59.995 Weighted Round Robin: Not Supported
00:14:59.995 Vendor Specific: Not Supported
00:14:59.995 Reset Timeout: 15000 ms
00:14:59.995 Doorbell Stride: 4 bytes
00:14:59.995 NVM Subsystem Reset: Not Supported
00:14:59.995 Command Sets Supported
00:14:59.995 NVM Command Set: Supported
00:14:59.995 Boot Partition: Not Supported
00:14:59.995 Memory Page Size Minimum: 4096 bytes
00:14:59.995 Memory Page Size Maximum: 4096 bytes
00:14:59.995 Persistent Memory Region: Not Supported
00:14:59.995 Optional Asynchronous Events Supported
00:14:59.995 Namespace Attribute Notices: Supported
00:14:59.995 Firmware Activation Notices: Not Supported
00:14:59.995 ANA Change Notices: Not Supported
00:14:59.995 PLE Aggregate Log Change Notices: Not Supported
00:14:59.995 LBA Status Info Alert Notices: Not Supported
00:14:59.995 EGE Aggregate Log Change Notices: Not Supported
00:14:59.995 Normal NVM Subsystem Shutdown event: Not Supported
00:14:59.995 Zone Descriptor Change Notices: Not Supported
00:14:59.995 Discovery Log Change Notices: Not Supported
00:14:59.995 Controller Attributes
00:14:59.995 128-bit Host Identifier: Supported
00:14:59.995 Non-Operational Permissive Mode: Not Supported
00:14:59.995 NVM Sets: Not Supported
00:14:59.995 Read Recovery Levels: Not Supported
00:14:59.995 Endurance Groups: Not Supported
00:14:59.995 Predictable Latency Mode: Not Supported
00:14:59.995 Traffic Based Keep ALive: Not Supported
00:14:59.995 Namespace Granularity: Not Supported
00:14:59.995 SQ Associations: Not Supported
00:14:59.995 UUID List: Not Supported
00:14:59.995 Multi-Domain Subsystem: Not Supported
00:14:59.995 Fixed Capacity Management: Not Supported
00:14:59.995 Variable Capacity Management: Not Supported
00:14:59.995 Delete Endurance Group: Not Supported
00:14:59.995 Delete NVM Set: Not Supported
00:14:59.995 Extended LBA Formats Supported: Not Supported
00:14:59.995 Flexible Data Placement Supported: Not Supported
00:14:59.995
00:14:59.995 Controller Memory Buffer Support
00:14:59.995 ================================
00:14:59.995 Supported: No
00:14:59.995
00:14:59.995 Persistent Memory Region Support
00:14:59.995 ================================
00:14:59.995 Supported: No
00:14:59.995
00:14:59.995 Admin Command Set Attributes
00:14:59.995 ============================
00:14:59.995 Security Send/Receive: Not Supported
00:14:59.995 Format NVM: Not Supported
00:14:59.995 Firmware Activate/Download: Not Supported
00:14:59.995 Namespace Management: Not Supported
00:14:59.995 Device Self-Test: Not Supported
00:14:59.995 Directives: Not Supported
00:14:59.995 NVMe-MI: Not Supported
00:14:59.995 Virtualization Management: Not Supported
00:14:59.995 Doorbell Buffer Config: Not Supported
00:14:59.995 Get LBA Status Capability: Not Supported
00:14:59.995 Command & Feature Lockdown Capability: Not Supported
00:14:59.995 Abort Command Limit: 4
00:14:59.995 Async Event Request Limit: 4
00:14:59.995 Number of Firmware Slots: N/A
00:14:59.995 Firmware Slot 1 Read-Only: N/A
00:14:59.995 Firmware Activation Without Reset: N/A
00:14:59.995 Multiple Update Detection Support: N/A
00:14:59.995 Firmware Update Granularity: No Information Provided
00:14:59.995 Per-Namespace SMART Log: No
00:14:59.995 Asymmetric Namespace Access Log Page: Not Supported
00:14:59.995 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:14:59.995 Command Effects Log Page: Supported
00:14:59.995 Get Log Page Extended Data: Supported
00:14:59.995 Telemetry Log Pages: Not Supported
00:14:59.995 Persistent Event Log Pages: Not Supported
00:14:59.995 Supported Log Pages Log Page: May Support
00:14:59.995 Commands Supported & Effects Log Page: Not Supported
00:14:59.995 Feature Identifiers & Effects Log Page:May Support
00:14:59.995 NVMe-MI Commands & Effects Log Page: May Support
00:14:59.995 Data Area 4 for Telemetry Log: Not Supported
00:14:59.995 Error Log Page Entries Supported: 128
00:14:59.995 Keep Alive: Supported
00:14:59.995 Keep Alive Granularity: 10000 ms
00:14:59.995
00:14:59.995 NVM Command Set Attributes
00:14:59.995 ==========================
00:14:59.995 Submission Queue Entry Size
00:14:59.995 Max: 64
00:14:59.995 Min: 64
00:14:59.995 Completion Queue Entry Size
00:14:59.995 Max: 16
00:14:59.995 Min: 16
00:14:59.995 Number of Namespaces: 32
00:14:59.995 Compare Command: Supported
00:14:59.995 Write Uncorrectable Command: Not Supported
00:14:59.995 Dataset Management Command: Supported
00:14:59.995 Write Zeroes Command: Supported
00:14:59.995 Set Features Save Field: Not Supported
00:14:59.995 Reservations: Not Supported
00:14:59.995 Timestamp: Not Supported
00:14:59.995 Copy: Supported
00:14:59.995 Volatile Write Cache: Present
00:14:59.995 Atomic Write Unit (Normal): 1
00:14:59.995 Atomic Write Unit (PFail): 1
00:14:59.995 Atomic Compare & Write Unit: 1
00:14:59.995 Fused Compare & Write: Supported
00:14:59.995 Scatter-Gather List
00:14:59.995 SGL Command Set: Supported (Dword aligned)
00:14:59.995 SGL Keyed: Not Supported
00:14:59.995 SGL Bit Bucket Descriptor: Not Supported
00:14:59.995 SGL Metadata Pointer: Not Supported
00:14:59.995 Oversized SGL: Not Supported
00:14:59.995 SGL Metadata Address: Not Supported
00:14:59.995 SGL Offset: Not Supported
00:14:59.995 Transport SGL Data Block: Not Supported
00:14:59.995 Replay Protected Memory Block: Not Supported
00:14:59.995
00:14:59.995 Firmware Slot Information
00:14:59.995 =========================
00:14:59.995 Active slot: 1
00:14:59.995 Slot 1 Firmware Revision: 25.01
00:14:59.995
00:14:59.995
00:14:59.995 Commands Supported and Effects
00:14:59.995 ==============================
00:14:59.995 Admin Commands
00:14:59.995 --------------
00:14:59.995 Get Log Page (02h): Supported
00:14:59.995 Identify (06h): Supported
00:14:59.996 Abort (08h): Supported
00:14:59.996 Set Features (09h): Supported
00:14:59.996 Get Features (0Ah): Supported
00:14:59.996 Asynchronous Event Request (0Ch): Supported
00:14:59.996 Keep Alive (18h): Supported
00:14:59.996 I/O Commands
00:14:59.996 ------------
00:14:59.996 Flush (00h): Supported LBA-Change
00:14:59.996 Write (01h): Supported LBA-Change
00:14:59.996 Read (02h): Supported
00:14:59.996 Compare (05h): Supported
00:14:59.996 Write Zeroes (08h): Supported LBA-Change
00:14:59.996 Dataset Management (09h): Supported LBA-Change
00:14:59.996 Copy (19h): Supported LBA-Change
00:14:59.996
00:14:59.996 Error Log
00:14:59.996 =========
00:14:59.996
00:14:59.996 Arbitration
00:14:59.996 ===========
00:14:59.996 Arbitration Burst: 1
00:14:59.996
00:14:59.996 Power Management
00:14:59.996 ================
00:14:59.996 Number of Power States: 1
00:14:59.996 Current Power State: Power State #0
00:14:59.996 Power State #0:
00:14:59.996 Max Power: 0.00 W
00:14:59.996 Non-Operational State: Operational
00:14:59.996 Entry Latency: Not Reported
00:14:59.996 Exit Latency: Not Reported
00:14:59.996 Relative Read Throughput: 0
00:14:59.996 Relative Read Latency: 0
00:14:59.996 Relative Write Throughput: 0
00:14:59.996 Relative Write Latency: 0
00:14:59.996 Idle Power: Not Reported
00:14:59.996 Active Power: Not Reported
00:14:59.996 Non-Operational Permissive Mode: Not Supported
00:14:59.996
00:14:59.996 Health Information
00:14:59.996 ==================
00:14:59.996 Critical Warnings:
00:14:59.996 Available Spare Space: OK
00:14:59.996 Temperature: OK
00:14:59.996 Device Reliability: OK
00:14:59.996 Read Only: No
00:14:59.996 Volatile Memory Backup: OK
00:14:59.996 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:59.996 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:59.996 Available Spare: 0%
00:14:59.996 Available Sp[2024-11-20 11:15:52.657417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:59.996 [2024-11-20 11:15:52.657425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:59.996 [2024-11-20 11:15:52.657446] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:14:59.996 [2024-11-20 11:15:52.657453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.996 [2024-11-20 11:15:52.657457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.996 [2024-11-20 11:15:52.657461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.996 [2024-11-20 11:15:52.657466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.996 [2024-11-20 11:15:52.657758] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:14:59.996 [2024-11-20 11:15:52.657765] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:14:59.996 [2024-11-20 11:15:52.658757]
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.996 [2024-11-20 11:15:52.658797] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:59.996 [2024-11-20 11:15:52.658802] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:59.996 [2024-11-20 11:15:52.659768] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:59.996 [2024-11-20 11:15:52.659776] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:59.996 [2024-11-20 11:15:52.659825] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:59.996 [2024-11-20 11:15:52.660790] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:59.996 are Threshold: 0% 00:14:59.996 Life Percentage Used: 0% 00:14:59.996 Data Units Read: 0 00:14:59.996 Data Units Written: 0 00:14:59.996 Host Read Commands: 0 00:14:59.996 Host Write Commands: 0 00:14:59.996 Controller Busy Time: 0 minutes 00:14:59.996 Power Cycles: 0 00:14:59.996 Power On Hours: 0 hours 00:14:59.996 Unsafe Shutdowns: 0 00:14:59.996 Unrecoverable Media Errors: 0 00:14:59.996 Lifetime Error Log Entries: 0 00:14:59.996 Warning Temperature Time: 0 minutes 00:14:59.996 Critical Temperature Time: 0 minutes 00:14:59.996 00:14:59.996 Number of Queues 00:14:59.996 ================ 00:14:59.996 Number of I/O Submission Queues: 127 00:14:59.996 Number of I/O Completion Queues: 127 00:14:59.996 00:14:59.996 Active Namespaces 00:14:59.996 ================= 00:14:59.996 Namespace ID:1 00:14:59.996 Error Recovery Timeout: Unlimited 00:14:59.996 Command Set Identifier: NVM (00h) 00:14:59.996 Deallocate: Supported 00:14:59.996 Deallocated/Unwritten Error: Not Supported 00:14:59.996 Deallocated Read Value: Unknown 00:14:59.996 Deallocate in Write Zeroes: Not Supported 00:14:59.996 Deallocated Guard Field: 0xFFFF 00:14:59.996 Flush: Supported 00:14:59.996 Reservation: Supported 00:14:59.996 Namespace Sharing Capabilities: Multiple Controllers 00:14:59.996 Size (in LBAs): 131072 (0GiB) 00:14:59.996 Capacity (in LBAs): 131072 (0GiB) 00:14:59.996 Utilization (in LBAs): 131072 (0GiB) 00:14:59.996 NGUID: F0C00E92325D4FABBB70DFD441B27E21 00:14:59.996 UUID: f0c00e92-325d-4fab-bb70-dfd441b27e21 00:14:59.996 Thin Provisioning: Not Supported 00:14:59.996 Per-NS Atomic Units: Yes 00:14:59.996 Atomic Boundary Size (Normal): 0 00:14:59.996 Atomic Boundary Size (PFail): 0 00:14:59.996 Atomic Boundary Offset: 0 00:14:59.996 Maximum Single Source Range Length: 65535 00:14:59.996 Maximum Copy Length: 65535 00:14:59.996 Maximum Source Range Count: 1 00:14:59.996 NGUID/EUI64 Never Reused: No 00:14:59.996 Namespace Write Protected: No 00:14:59.996 Number of LBA Formats: 1 00:14:59.996 Current LBA Format: LBA Format #00 00:14:59.996 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:59.996 00:14:59.996 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
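Aside: the spdk_nvme_perf step above drives the vfio-user controller from a second userspace process over the same socket path. A minimal reproduction sketch, assuming a built SPDK tree at the workspace path recorded in this log and a target still listening at that vfio-user path; all flags are copied verbatim from the sh@84/sh@85 invocations, and SPDK_DIR/TRID are illustrative variable names, not part of the test script:

  #!/usr/bin/env bash
  # Sketch only: replays the read and write perf passes recorded in this log.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: built SPDK tree
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # 4096-byte reads for 5 s at queue depth 128 on core mask 0x2 (-s 256 and -g kept as recorded)
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # Same pass with writes, matching the sh@85 step that follows
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2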
00:15:00.257 [2024-11-20 11:15:52.829793] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:05.725 Initializing NVMe Controllers
00:15:05.725 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:05.725 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:05.725 Initialization complete. Launching workers.
00:15:05.725 ========================================================
00:15:05.725 Latency(us)
00:15:05.725 Device Information : IOPS MiB/s Average min max
00:15:05.725 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40008.22 156.28 3199.00 848.08 9743.89
00:15:05.725 ========================================================
00:15:05.725 Total : 40008.22 156.28 3199.00 848.08 9743.89
00:15:05.725
00:15:05.725 [2024-11-20 11:15:57.849766] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:05.725 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:15:05.725 [2024-11-20 11:15:58.044633] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:11.025 Initializing NVMe Controllers
00:15:11.025 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:11.025 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:11.025 Initialization complete. Launching workers.
00:15:11.025 ========================================================
00:15:11.025 Latency(us)
00:15:11.025 Device Information : IOPS MiB/s Average min max
00:15:11.025 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.97 62.72 7977.65 5992.36 9970.86
00:15:11.025 ========================================================
00:15:11.025 Total : 16055.97 62.72 7977.65 5992.36 9970.86
00:15:11.025
00:15:11.025 [2024-11-20 11:16:03.085249] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:11.025 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:15:11.025 [2024-11-20 11:16:03.283072] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:16.314 [2024-11-20 11:16:08.367439] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:16.314 Initializing NVMe Controllers
00:15:16.314 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:16.314 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:16.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:15:16.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:15:16.314 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:15:16.314 Initialization complete. Launching workers.
00:15:16.314 Starting thread on core 2
00:15:16.314 Starting thread on core 3
00:15:16.314 Starting thread on core 1
00:15:16.314 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:15:16.314 [2024-11-20 11:16:08.619489] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:19.613 [2024-11-20 11:16:11.687395] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:19.613 Initializing NVMe Controllers
00:15:19.613 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:19.613 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:19.613 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:15:19.613 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:15:19.613 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:15:19.613 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:15:19.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:15:19.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:15:19.613 Initialization complete. Launching workers.
00:15:19.613 Starting thread on core 1 with urgent priority queue
00:15:19.613 Starting thread on core 2 with urgent priority queue
00:15:19.613 Starting thread on core 3 with urgent priority queue
00:15:19.613 Starting thread on core 0 with urgent priority queue
00:15:19.613 SPDK bdev Controller (SPDK1 ) core 0: 10523.00 IO/s 9.50 secs/100000 ios
00:15:19.613 SPDK bdev Controller (SPDK1 ) core 1: 13172.67 IO/s 7.59 secs/100000 ios
00:15:19.613 SPDK bdev Controller (SPDK1 ) core 2: 10271.00 IO/s 9.74 secs/100000 ios
00:15:19.613 SPDK bdev Controller (SPDK1 ) core 3: 10448.00 IO/s 9.57 secs/100000 ios
00:15:19.613 ========================================================
00:15:19.613
00:15:19.613 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:19.613 [2024-11-20 11:16:11.922457] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:19.613 Initializing NVMe Controllers
00:15:19.613 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:19.613 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:19.613 Namespace ID: 1 size: 0GB
00:15:19.613 Initialization complete.
00:15:19.613 INFO: using host memory buffer for IO
00:15:19.613 Hello world!
00:15:19.613 [2024-11-20 11:16:11.958679] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:19.613 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:19.613 [2024-11-20 11:16:12.198523] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:20.555 Initializing NVMe Controllers
00:15:20.555 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:20.555 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:20.555 Initialization complete. Launching workers.
00:15:20.555 submit (in ns) avg, min, max = 5260.1, 2821.7, 5993280.0 00:15:20.555 complete (in ns) avg, min, max = 18142.5, 1625.0, 4001750.0 00:15:20.555 00:15:20.555 Submit histogram 00:15:20.555 ================ 00:15:20.555 Range in us Cumulative Count 00:15:20.555 2.813 - 2.827: 0.0050% ( 1) 00:15:20.555 2.827 - 2.840: 0.7015% ( 140) 00:15:20.555 2.840 - 2.853: 2.2438% ( 310) 00:15:20.555 2.853 - 2.867: 5.3134% ( 617) 00:15:20.555 2.867 - 2.880: 10.0448% ( 951) 00:15:20.555 2.880 - 2.893: 16.1692% ( 1231) 00:15:20.555 2.893 - 2.907: 21.7512% ( 1122) 00:15:20.555 2.907 - 2.920: 28.7015% ( 1397) 00:15:20.555 2.920 - 2.933: 34.2438% ( 1114) 00:15:20.555 2.933 - 2.947: 39.4776% ( 1052) 00:15:20.555 2.947 - 2.960: 44.1841% ( 946) 00:15:20.555 2.960 - 2.973: 49.7413% ( 1117) 00:15:20.555 2.973 - 2.987: 56.5124% ( 1361) 00:15:20.555 2.987 - 3.000: 64.9055% ( 1687) 00:15:20.555 3.000 - 3.013: 74.1542% ( 1859) 00:15:20.555 3.013 - 3.027: 81.8109% ( 1539) 00:15:20.555 3.027 - 3.040: 88.2985% ( 1304) 00:15:20.555 3.040 - 3.053: 92.7313% ( 891) 00:15:20.555 3.053 - 3.067: 95.7960% ( 616) 00:15:20.555 3.067 - 3.080: 97.7114% ( 385) 00:15:20.555 3.080 - 3.093: 98.6766% ( 194) 00:15:20.555 3.093 - 3.107: 99.0746% ( 80) 00:15:20.555 3.107 - 3.120: 99.3234% ( 50) 00:15:20.555 3.120 - 3.133: 99.4478% ( 25) 00:15:20.555 3.133 - 3.147: 99.5274% ( 16) 00:15:20.555 3.147 - 3.160: 99.5721% ( 9) 00:15:20.555 3.160 - 3.173: 99.5871% ( 3) 00:15:20.555 3.200 - 3.213: 99.5920% ( 1) 00:15:20.555 3.347 - 3.360: 99.5970% ( 1) 00:15:20.555 3.400 - 3.413: 99.6020% ( 1) 00:15:20.555 3.440 - 3.467: 99.6070% ( 1) 00:15:20.555 3.493 - 3.520: 99.6119% ( 1) 00:15:20.555 3.547 - 3.573: 99.6169% ( 1) 00:15:20.555 3.600 - 3.627: 99.6219% ( 1) 00:15:20.555 3.653 - 3.680: 99.6269% ( 1) 00:15:20.555 3.680 - 3.707: 99.6318% ( 1) 00:15:20.555 3.947 - 3.973: 99.6368% ( 1) 00:15:20.555 3.973 - 4.000: 99.6468% ( 2) 00:15:20.555 4.053 - 4.080: 99.6517% ( 1) 00:15:20.555 4.507 - 4.533: 99.6567% ( 1) 00:15:20.555 4.560 - 4.587: 99.6617% ( 1) 00:15:20.555 4.853 - 4.880: 99.6667% ( 1) 00:15:20.555 4.907 - 4.933: 99.6766% ( 2) 00:15:20.555 4.987 - 5.013: 99.6816% ( 1) 00:15:20.555 5.040 - 5.067: 99.6915% ( 2) 00:15:20.555 5.093 - 5.120: 99.7015% ( 2) 00:15:20.555 5.147 - 5.173: 99.7065% ( 1) 00:15:20.555 5.253 - 5.280: 99.7114% ( 1) 00:15:20.555 5.280 - 5.307: 99.7164% ( 1) 00:15:20.555 5.333 - 5.360: 99.7214% ( 1) 00:15:20.555 5.547 - 5.573: 99.7264% ( 1) 00:15:20.555 5.627 - 5.653: 99.7313% ( 1) 00:15:20.555 5.680 - 5.707: 99.7363% ( 1) 00:15:20.555 5.760 - 5.787: 99.7463% ( 2) 00:15:20.555 5.787 - 5.813: 99.7612% ( 3) 00:15:20.555 5.840 - 5.867: 99.7662% ( 1) 00:15:20.555 5.867 - 5.893: 99.7761% ( 2) 00:15:20.555 5.920 - 5.947: 99.7910% ( 3) 00:15:20.555 5.973 - 6.000: 99.7960% ( 1) 00:15:20.555 6.000 - 6.027: 99.8109% ( 3) 00:15:20.555 6.027 - 6.053: 99.8159% ( 1) 00:15:20.555 6.107 - 6.133: 99.8209% ( 1) 00:15:20.555 6.133 - 6.160: 99.8259% ( 1) 00:15:20.555 6.160 - 6.187: 99.8507% ( 5) 00:15:20.555 6.187 - 6.213: 99.8607% ( 2) 00:15:20.555 6.213 - 6.240: 99.8657% ( 1) 00:15:20.555 6.373 - 6.400: 99.8856% ( 4) 00:15:20.555 6.427 - 6.453: 99.8905% ( 1) 00:15:20.555 6.453 - 6.480: 99.8955% ( 1) 00:15:20.555 6.480 - 6.507: 99.9005% ( 1) 00:15:20.555 6.507 - 6.533: 99.9055% ( 1) 00:15:20.555 6.533 - 6.560: 99.9204% ( 3) 00:15:20.555 6.560 - 6.587: 99.9254% ( 1) 00:15:20.555 6.587 - 6.613: 99.9303% ( 1) 00:15:20.555 6.693 - 6.720: 99.9353% ( 1) 00:15:20.555 6.747 - 6.773: 99.9403% ( 1) 00:15:20.555 6.827 - 6.880: 99.9453% ( 1) 
00:15:20.555 3986.773 - 4014.080: 99.9950% ( 10) 00:15:20.555 5980.160 - 6007.467: 100.0000% ( 1) 00:15:20.555 00:15:20.555 [2024-11-20 11:16:13.217120] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:20.555 Complete histogram 00:15:20.555 ================== 00:15:20.555 Range in us Cumulative Count 00:15:20.555 1.620 - 1.627: 0.0050% ( 1) 00:15:20.555 1.633 - 1.640: 0.0299% ( 5) 00:15:20.555 1.640 - 1.647: 0.7711% ( 149) 00:15:20.555 1.647 - 1.653: 0.8657% ( 19) 00:15:20.555 1.653 - 1.660: 0.9502% ( 17) 00:15:20.555 1.660 - 1.667: 1.0149% ( 13) 00:15:20.555 1.667 - 1.673: 1.0348% ( 4) 00:15:20.555 1.673 - 1.680: 1.0547% ( 4) 00:15:20.555 1.687 - 1.693: 1.0647% ( 2) 00:15:20.555 1.693 - 1.700: 1.1443% ( 16) 00:15:20.555 1.700 - 1.707: 25.2786% ( 4851) 00:15:20.555 1.707 - 1.720: 55.1592% ( 6006) 00:15:20.555 1.720 - 1.733: 73.4478% ( 3676) 00:15:20.555 1.733 - 1.747: 81.5174% ( 1622) 00:15:20.555 1.747 - 1.760: 83.1692% ( 332) 00:15:20.555 1.760 - 1.773: 87.2836% ( 827) 00:15:20.555 1.773 - 1.787: 93.0149% ( 1152) 00:15:20.555 1.787 - 1.800: 96.8756% ( 776) 00:15:20.555 1.800 - 1.813: 98.6965% ( 366) 00:15:20.555 1.813 - 1.827: 99.2786% ( 117) 00:15:20.555 1.827 - 1.840: 99.3781% ( 20) 00:15:20.555 1.840 - 1.853: 99.3930% ( 3) 00:15:20.555 4.133 - 4.160: 99.3980% ( 1) 00:15:20.555 4.267 - 4.293: 99.4080% ( 2) 00:15:20.555 4.320 - 4.347: 99.4129% ( 1) 00:15:20.555 4.347 - 4.373: 99.4179% ( 1) 00:15:20.555 4.480 - 4.507: 99.4229% ( 1) 00:15:20.555 4.533 - 4.560: 99.4279% ( 1) 00:15:20.555 4.587 - 4.613: 99.4478% ( 4) 00:15:20.555 4.640 - 4.667: 99.4577% ( 2) 00:15:20.555 4.667 - 4.693: 99.4726% ( 3) 00:15:20.555 4.693 - 4.720: 99.4826% ( 2) 00:15:20.555 4.720 - 4.747: 99.4876% ( 1) 00:15:20.555 4.747 - 4.773: 99.4925% ( 1) 00:15:20.555 4.800 - 4.827: 99.4975% ( 1) 00:15:20.555 4.827 - 4.853: 99.5025% ( 1) 00:15:20.555 4.853 - 4.880: 99.5075% ( 1) 00:15:20.555 4.960 - 4.987: 99.5124% ( 1) 00:15:20.555 4.987 - 5.013: 99.5174% ( 1) 00:15:20.555 5.067 - 5.093: 99.5224% ( 1) 00:15:20.555 5.120 - 5.147: 99.5274% ( 1) 00:15:20.555 5.227 - 5.253: 99.5323% ( 1) 00:15:20.555 5.253 - 5.280: 99.5423% ( 2) 00:15:20.555 5.333 - 5.360: 99.5473% ( 1) 00:15:20.555 5.360 - 5.387: 99.5522% ( 1) 00:15:20.555 5.440 - 5.467: 99.5572% ( 1) 00:15:20.555 5.867 - 5.893: 99.5622% ( 1) 00:15:20.555 6.000 - 6.027: 99.5672% ( 1) 00:15:20.555 29.013 - 29.227: 99.5721% ( 1) 00:15:20.555 30.293 - 30.507: 99.5771% ( 1) 00:15:20.555 34.133 - 34.347: 99.5821% ( 1) 00:15:20.555 80.213 - 80.640: 99.5871% ( 1) 00:15:20.555 2129.920 - 2143.573: 99.5920% ( 1) 00:15:20.555 3986.773 - 4014.080: 100.0000% ( 82) 00:15:20.555 00:15:20.556 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:20.556 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:20.556 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:20.556 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:20.556 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:20.816 [ 00:15:20.816 { 00:15:20.816 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:20.816 "subtype": "Discovery", 00:15:20.816 "listen_addresses": [], 00:15:20.816 "allow_any_host": true, 00:15:20.816 "hosts": [] 00:15:20.816 }, 00:15:20.816 { 00:15:20.816 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.816 "subtype": "NVMe", 00:15:20.816 "listen_addresses": [ 00:15:20.816 { 00:15:20.816 "trtype": "VFIOUSER", 00:15:20.816 "adrfam": "IPv4", 00:15:20.816 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.816 "trsvcid": "0" 00:15:20.816 } 00:15:20.816 ], 00:15:20.816 "allow_any_host": true, 00:15:20.816 "hosts": [], 00:15:20.816 "serial_number": "SPDK1", 00:15:20.817 "model_number": "SPDK bdev Controller", 00:15:20.817 "max_namespaces": 32, 00:15:20.817 "min_cntlid": 1, 00:15:20.817 "max_cntlid": 65519, 00:15:20.817 "namespaces": [ 00:15:20.817 { 00:15:20.817 "nsid": 1, 00:15:20.817 "bdev_name": "Malloc1", 00:15:20.817 "name": "Malloc1", 00:15:20.817 "nguid": "F0C00E92325D4FABBB70DFD441B27E21", 00:15:20.817 "uuid": "f0c00e92-325d-4fab-bb70-dfd441b27e21" 00:15:20.817 } 00:15:20.817 ] 00:15:20.817 }, 00:15:20.817 { 00:15:20.817 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.817 "subtype": "NVMe", 00:15:20.817 "listen_addresses": [ 00:15:20.817 { 00:15:20.817 "trtype": "VFIOUSER", 00:15:20.817 "adrfam": "IPv4", 00:15:20.817 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.817 "trsvcid": "0" 00:15:20.817 } 00:15:20.817 ], 00:15:20.817 "allow_any_host": true, 00:15:20.817 "hosts": [], 00:15:20.817 "serial_number": "SPDK2", 00:15:20.817 "model_number": "SPDK bdev Controller", 00:15:20.817 "max_namespaces": 32, 00:15:20.817 "min_cntlid": 1, 00:15:20.817 "max_cntlid": 65519, 00:15:20.817 "namespaces": [ 00:15:20.817 { 00:15:20.817 "nsid": 1, 00:15:20.817 "bdev_name": "Malloc2", 00:15:20.817 "name": "Malloc2", 00:15:20.817 "nguid": "D73D3F6FD8C544B584D517FC17126916", 00:15:20.817 "uuid": "d73d3f6f-d8c5-44b5-84d5-17fc17126916" 00:15:20.817 } 00:15:20.817 ] 00:15:20.817 } 00:15:20.817 ] 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2689122 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:20.817 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:21.077 [2024-11-20 11:16:13.615522] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.077 Malloc3 00:15:21.077 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:21.077 [2024-11-20 11:16:13.793808] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.337 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.338 Asynchronous Event Request test 00:15:21.338 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.338 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.338 Registering asynchronous event callbacks... 00:15:21.338 Starting namespace attribute notice tests for all controllers... 00:15:21.338 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.338 aer_cb - Changed Namespace 00:15:21.338 Cleaning up... 00:15:21.338 [ 00:15:21.338 { 00:15:21.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.338 "subtype": "Discovery", 00:15:21.338 "listen_addresses": [], 00:15:21.338 "allow_any_host": true, 00:15:21.338 "hosts": [] 00:15:21.338 }, 00:15:21.338 { 00:15:21.338 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.338 "subtype": "NVMe", 00:15:21.338 "listen_addresses": [ 00:15:21.338 { 00:15:21.338 "trtype": "VFIOUSER", 00:15:21.338 "adrfam": "IPv4", 00:15:21.338 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.338 "trsvcid": "0" 00:15:21.338 } 00:15:21.338 ], 00:15:21.338 "allow_any_host": true, 00:15:21.338 "hosts": [], 00:15:21.338 "serial_number": "SPDK1", 00:15:21.338 "model_number": "SPDK bdev Controller", 00:15:21.338 "max_namespaces": 32, 00:15:21.338 "min_cntlid": 1, 00:15:21.338 "max_cntlid": 65519, 00:15:21.338 "namespaces": [ 00:15:21.338 { 00:15:21.338 "nsid": 1, 00:15:21.338 "bdev_name": "Malloc1", 00:15:21.338 "name": "Malloc1", 00:15:21.338 "nguid": "F0C00E92325D4FABBB70DFD441B27E21", 00:15:21.338 "uuid": "f0c00e92-325d-4fab-bb70-dfd441b27e21" 00:15:21.338 }, 00:15:21.338 { 00:15:21.338 "nsid": 2, 00:15:21.338 "bdev_name": "Malloc3", 00:15:21.338 "name": "Malloc3", 00:15:21.338 "nguid": "14F519BA8A8949CF9A5F0F66A5A5688E", 00:15:21.338 "uuid": "14f519ba-8a89-49cf-9a5f-0f66a5a5688e" 00:15:21.338 } 00:15:21.338 ] 00:15:21.338 }, 00:15:21.338 { 00:15:21.338 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.338 "subtype": "NVMe", 00:15:21.338 "listen_addresses": [ 00:15:21.338 { 00:15:21.338 "trtype": "VFIOUSER", 00:15:21.338 "adrfam": "IPv4", 00:15:21.338 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.338 "trsvcid": "0" 00:15:21.338 } 00:15:21.338 ], 00:15:21.338 "allow_any_host": true, 00:15:21.338 "hosts": [], 00:15:21.338 "serial_number": "SPDK2", 00:15:21.338 "model_number": "SPDK bdev 
Controller", 00:15:21.338 "max_namespaces": 32, 00:15:21.338 "min_cntlid": 1, 00:15:21.338 "max_cntlid": 65519, 00:15:21.338 "namespaces": [ 00:15:21.338 { 00:15:21.338 "nsid": 1, 00:15:21.338 "bdev_name": "Malloc2", 00:15:21.338 "name": "Malloc2", 00:15:21.338 "nguid": "D73D3F6FD8C544B584D517FC17126916", 00:15:21.338 "uuid": "d73d3f6f-d8c5-44b5-84d5-17fc17126916" 00:15:21.338 } 00:15:21.338 ] 00:15:21.338 } 00:15:21.338 ] 00:15:21.338 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2689122 00:15:21.338 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.338 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:21.338 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:21.338 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.338 [2024-11-20 11:16:14.016669] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:15:21.338 [2024-11-20 11:16:14.016710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689353 ] 00:15:21.338 [2024-11-20 11:16:14.056368] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:21.338 [2024-11-20 11:16:14.061550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.338 [2024-11-20 11:16:14.061570] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4bc9c8c000 00:15:21.338 [2024-11-20 11:16:14.062555] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.063565] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.064569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.065576] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.066586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.067590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.068600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.338 [2024-11-20 11:16:14.069607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:21.338 [2024-11-20 11:16:14.070619] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.338 [2024-11-20 11:16:14.070627] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4bc9c81000 00:15:21.338 [2024-11-20 11:16:14.071538] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.600 [2024-11-20 11:16:14.080920] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:21.600 [2024-11-20 11:16:14.080940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:21.600 [2024-11-20 11:16:14.086012] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:21.600 [2024-11-20 11:16:14.086046] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.600 [2024-11-20 11:16:14.086108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:21.600 [2024-11-20 11:16:14.086117] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:21.600 [2024-11-20 11:16:14.086121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:21.600 [2024-11-20 11:16:14.087016] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:21.600 [2024-11-20 11:16:14.087024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:21.600 [2024-11-20 11:16:14.087029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:21.600 [2024-11-20 11:16:14.088021] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:21.600 [2024-11-20 11:16:14.088028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:21.600 [2024-11-20 11:16:14.088034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.600 [2024-11-20 11:16:14.089026] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:21.600 [2024-11-20 11:16:14.089033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.601 [2024-11-20 11:16:14.090031] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:21.601 [2024-11-20 11:16:14.090038] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:21.601 [2024-11-20 11:16:14.090042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:21.601 [2024-11-20 11:16:14.090047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.601 [2024-11-20 11:16:14.090153] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:21.601 [2024-11-20 11:16:14.090157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.601 [2024-11-20 11:16:14.090164] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:21.601 [2024-11-20 11:16:14.091035] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:21.601 [2024-11-20 11:16:14.092039] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:21.601 [2024-11-20 11:16:14.093047] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:21.601 [2024-11-20 11:16:14.094048] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.601 [2024-11-20 11:16:14.094077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.601 [2024-11-20 11:16:14.095058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:21.601 [2024-11-20 11:16:14.095064] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.601 [2024-11-20 11:16:14.095069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.095084] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:21.601 [2024-11-20 11:16:14.095090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.095099] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.601 [2024-11-20 11:16:14.095102] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.601 [2024-11-20 11:16:14.095105] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.601 [2024-11-20 11:16:14.095114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.099166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.601 
[2024-11-20 11:16:14.099174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:21.601 [2024-11-20 11:16:14.099178] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:21.601 [2024-11-20 11:16:14.099181] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:21.601 [2024-11-20 11:16:14.099184] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.601 [2024-11-20 11:16:14.099189] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:21.601 [2024-11-20 11:16:14.099193] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:21.601 [2024-11-20 11:16:14.099196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.099203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.099210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.107164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.601 [2024-11-20 11:16:14.107173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.601 [2024-11-20 11:16:14.107179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.601 [2024-11-20 11:16:14.107185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.601 [2024-11-20 11:16:14.107191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.601 [2024-11-20 11:16:14.107195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.107200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.107206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.115162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.601 [2024-11-20 11:16:14.115169] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:21.601 [2024-11-20 11:16:14.115173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:21.601 [2024-11-20 11:16:14.115178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.115182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.115188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.123163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.601 [2024-11-20 11:16:14.123208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.123214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.123219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.601 [2024-11-20 11:16:14.123222] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.601 [2024-11-20 11:16:14.123225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.601 [2024-11-20 11:16:14.123229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.131162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.601 [2024-11-20 11:16:14.131170] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:21.601 [2024-11-20 11:16:14.131180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.131186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.131191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.601 [2024-11-20 11:16:14.131194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.601 [2024-11-20 11:16:14.131196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.601 [2024-11-20 11:16:14.131201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.139161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.601 [2024-11-20 11:16:14.139172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.139177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.139182] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.601 [2024-11-20 11:16:14.139187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.601 [2024-11-20 11:16:14.139189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.601 [2024-11-20 11:16:14.139194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.601 [2024-11-20 11:16:14.147163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.601 [2024-11-20 11:16:14.147178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.147183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.147189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:21.601 [2024-11-20 11:16:14.147193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:21.602 [2024-11-20 11:16:14.147196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.602 [2024-11-20 11:16:14.147200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:21.602 [2024-11-20 11:16:14.147204] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.602 [2024-11-20 11:16:14.147207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:21.602 [2024-11-20 11:16:14.147210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:21.602 [2024-11-20 11:16:14.147223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.155163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.602 [2024-11-20 11:16:14.155173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.163162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.602 [2024-11-20 11:16:14.163171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.171162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:21.602 [2024-11-20 11:16:14.171171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.179162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.602 [2024-11-20 11:16:14.179174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.602 [2024-11-20 11:16:14.179177] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.602 [2024-11-20 11:16:14.179180] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.602 [2024-11-20 11:16:14.179182] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.602 [2024-11-20 11:16:14.179184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:21.602 [2024-11-20 11:16:14.179189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.602 [2024-11-20 11:16:14.179196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.602 [2024-11-20 11:16:14.179199] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.602 [2024-11-20 11:16:14.179201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.602 [2024-11-20 11:16:14.179206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.179211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.602 [2024-11-20 11:16:14.179214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.602 [2024-11-20 11:16:14.179216] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.602 [2024-11-20 11:16:14.179220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.179226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.602 [2024-11-20 11:16:14.179228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.602 [2024-11-20 11:16:14.179231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:21.602 [2024-11-20 11:16:14.179235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.602 [2024-11-20 11:16:14.187162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.602 [2024-11-20 11:16:14.187172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.602 [2024-11-20 11:16:14.187180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.602 
[2024-11-20 11:16:14.187185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.602 ===================================================== 00:15:21.602 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.602 ===================================================== 00:15:21.602 Controller Capabilities/Features 00:15:21.602 ================================ 00:15:21.602 Vendor ID: 4e58 00:15:21.602 Subsystem Vendor ID: 4e58 00:15:21.602 Serial Number: SPDK2 00:15:21.602 Model Number: SPDK bdev Controller 00:15:21.602 Firmware Version: 25.01 00:15:21.602 Recommended Arb Burst: 6 00:15:21.602 IEEE OUI Identifier: 8d 6b 50 00:15:21.602 Multi-path I/O 00:15:21.602 May have multiple subsystem ports: Yes 00:15:21.602 May have multiple controllers: Yes 00:15:21.602 Associated with SR-IOV VF: No 00:15:21.602 Max Data Transfer Size: 131072 00:15:21.602 Max Number of Namespaces: 32 00:15:21.602 Max Number of I/O Queues: 127 00:15:21.602 NVMe Specification Version (VS): 1.3 00:15:21.602 NVMe Specification Version (Identify): 1.3 00:15:21.602 Maximum Queue Entries: 256 00:15:21.602 Contiguous Queues Required: Yes 00:15:21.602 Arbitration Mechanisms Supported 00:15:21.602 Weighted Round Robin: Not Supported 00:15:21.602 Vendor Specific: Not Supported 00:15:21.602 Reset Timeout: 15000 ms 00:15:21.602 Doorbell Stride: 4 bytes 00:15:21.602 NVM Subsystem Reset: Not Supported 00:15:21.602 Command Sets Supported 00:15:21.602 NVM Command Set: Supported 00:15:21.602 Boot Partition: Not Supported 00:15:21.602 Memory Page Size Minimum: 4096 bytes 00:15:21.602 Memory Page Size Maximum: 4096 bytes 00:15:21.602 Persistent Memory Region: Not Supported 00:15:21.602 Optional Asynchronous Events Supported 00:15:21.602 Namespace Attribute Notices: Supported 00:15:21.602 Firmware Activation Notices: Not Supported 00:15:21.602 ANA Change Notices: Not Supported 00:15:21.602 PLE Aggregate Log Change Notices: Not Supported 00:15:21.602 LBA Status Info Alert Notices: Not Supported 00:15:21.602 EGE Aggregate Log Change Notices: Not Supported 00:15:21.602 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.602 Zone Descriptor Change Notices: Not Supported 00:15:21.602 Discovery Log Change Notices: Not Supported 00:15:21.602 Controller Attributes 00:15:21.602 128-bit Host Identifier: Supported 00:15:21.602 Non-Operational Permissive Mode: Not Supported 00:15:21.602 NVM Sets: Not Supported 00:15:21.602 Read Recovery Levels: Not Supported 00:15:21.602 Endurance Groups: Not Supported 00:15:21.602 Predictable Latency Mode: Not Supported 00:15:21.602 Traffic Based Keep ALive: Not Supported 00:15:21.602 Namespace Granularity: Not Supported 00:15:21.602 SQ Associations: Not Supported 00:15:21.602 UUID List: Not Supported 00:15:21.602 Multi-Domain Subsystem: Not Supported 00:15:21.602 Fixed Capacity Management: Not Supported 00:15:21.602 Variable Capacity Management: Not Supported 00:15:21.602 Delete Endurance Group: Not Supported 00:15:21.602 Delete NVM Set: Not Supported 00:15:21.602 Extended LBA Formats Supported: Not Supported 00:15:21.602 Flexible Data Placement Supported: Not Supported 00:15:21.602 00:15:21.602 Controller Memory Buffer Support 00:15:21.602 ================================ 00:15:21.602 Supported: No 00:15:21.602 00:15:21.602 Persistent Memory Region Support 00:15:21.602 ================================ 00:15:21.602 Supported: No 00:15:21.602 00:15:21.602 Admin Command Set Attributes 
00:15:21.602 ============================ 00:15:21.602 Security Send/Receive: Not Supported 00:15:21.602 Format NVM: Not Supported 00:15:21.602 Firmware Activate/Download: Not Supported 00:15:21.602 Namespace Management: Not Supported 00:15:21.602 Device Self-Test: Not Supported 00:15:21.602 Directives: Not Supported 00:15:21.602 NVMe-MI: Not Supported 00:15:21.602 Virtualization Management: Not Supported 00:15:21.602 Doorbell Buffer Config: Not Supported 00:15:21.602 Get LBA Status Capability: Not Supported 00:15:21.602 Command & Feature Lockdown Capability: Not Supported 00:15:21.602 Abort Command Limit: 4 00:15:21.602 Async Event Request Limit: 4 00:15:21.602 Number of Firmware Slots: N/A 00:15:21.602 Firmware Slot 1 Read-Only: N/A 00:15:21.602 Firmware Activation Without Reset: N/A 00:15:21.602 Multiple Update Detection Support: N/A 00:15:21.602 Firmware Update Granularity: No Information Provided 00:15:21.602 Per-Namespace SMART Log: No 00:15:21.602 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.602 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:21.602 Command Effects Log Page: Supported 00:15:21.602 Get Log Page Extended Data: Supported 00:15:21.602 Telemetry Log Pages: Not Supported 00:15:21.602 Persistent Event Log Pages: Not Supported 00:15:21.602 Supported Log Pages Log Page: May Support 00:15:21.602 Commands Supported & Effects Log Page: Not Supported 00:15:21.602 Feature Identifiers & Effects Log Page:May Support 00:15:21.602 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.602 Data Area 4 for Telemetry Log: Not Supported 00:15:21.603 Error Log Page Entries Supported: 128 00:15:21.603 Keep Alive: Supported 00:15:21.603 Keep Alive Granularity: 10000 ms 00:15:21.603 00:15:21.603 NVM Command Set Attributes 00:15:21.603 ========================== 00:15:21.603 Submission Queue Entry Size 00:15:21.603 Max: 64 00:15:21.603 Min: 64 00:15:21.603 Completion Queue Entry Size 00:15:21.603 Max: 16 00:15:21.603 Min: 16 00:15:21.603 Number of Namespaces: 32 00:15:21.603 Compare Command: Supported 00:15:21.603 Write Uncorrectable Command: Not Supported 00:15:21.603 Dataset Management Command: Supported 00:15:21.603 Write Zeroes Command: Supported 00:15:21.603 Set Features Save Field: Not Supported 00:15:21.603 Reservations: Not Supported 00:15:21.603 Timestamp: Not Supported 00:15:21.603 Copy: Supported 00:15:21.603 Volatile Write Cache: Present 00:15:21.603 Atomic Write Unit (Normal): 1 00:15:21.603 Atomic Write Unit (PFail): 1 00:15:21.603 Atomic Compare & Write Unit: 1 00:15:21.603 Fused Compare & Write: Supported 00:15:21.603 Scatter-Gather List 00:15:21.603 SGL Command Set: Supported (Dword aligned) 00:15:21.603 SGL Keyed: Not Supported 00:15:21.603 SGL Bit Bucket Descriptor: Not Supported 00:15:21.603 SGL Metadata Pointer: Not Supported 00:15:21.603 Oversized SGL: Not Supported 00:15:21.603 SGL Metadata Address: Not Supported 00:15:21.603 SGL Offset: Not Supported 00:15:21.603 Transport SGL Data Block: Not Supported 00:15:21.603 Replay Protected Memory Block: Not Supported 00:15:21.603 00:15:21.603 Firmware Slot Information 00:15:21.603 ========================= 00:15:21.603 Active slot: 1 00:15:21.603 Slot 1 Firmware Revision: 25.01 00:15:21.603 00:15:21.603 00:15:21.603 Commands Supported and Effects 00:15:21.603 ============================== 00:15:21.603 Admin Commands 00:15:21.603 -------------- 00:15:21.603 Get Log Page (02h): Supported 00:15:21.603 Identify (06h): Supported 00:15:21.603 Abort (08h): Supported 00:15:21.603 Set Features (09h): Supported 
00:15:21.603 Get Features (0Ah): Supported 00:15:21.603 Asynchronous Event Request (0Ch): Supported 00:15:21.603 Keep Alive (18h): Supported 00:15:21.603 I/O Commands 00:15:21.603 ------------ 00:15:21.603 Flush (00h): Supported LBA-Change 00:15:21.603 Write (01h): Supported LBA-Change 00:15:21.603 Read (02h): Supported 00:15:21.603 Compare (05h): Supported 00:15:21.603 Write Zeroes (08h): Supported LBA-Change 00:15:21.603 Dataset Management (09h): Supported LBA-Change 00:15:21.603 Copy (19h): Supported LBA-Change 00:15:21.603 00:15:21.603 Error Log 00:15:21.603 ========= 00:15:21.603 00:15:21.603 Arbitration 00:15:21.603 =========== 00:15:21.603 Arbitration Burst: 1 00:15:21.603 00:15:21.603 Power Management 00:15:21.603 ================ 00:15:21.603 Number of Power States: 1 00:15:21.603 Current Power State: Power State #0 00:15:21.603 Power State #0: 00:15:21.603 Max Power: 0.00 W 00:15:21.603 Non-Operational State: Operational 00:15:21.603 Entry Latency: Not Reported 00:15:21.603 Exit Latency: Not Reported 00:15:21.603 Relative Read Throughput: 0 00:15:21.603 Relative Read Latency: 0 00:15:21.603 Relative Write Throughput: 0 00:15:21.603 Relative Write Latency: 0 00:15:21.603 Idle Power: Not Reported 00:15:21.603 Active Power: Not Reported 00:15:21.603 Non-Operational Permissive Mode: Not Supported 00:15:21.603 00:15:21.603 Health Information 00:15:21.603 ================== 00:15:21.603 Critical Warnings: 00:15:21.603 Available Spare Space: OK 00:15:21.603 Temperature: OK 00:15:21.603 Device Reliability: OK 00:15:21.603 Read Only: No 00:15:21.603 Volatile Memory Backup: OK 00:15:21.603 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:21.603 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.603 Available Spare: 0% 00:15:21.603 Available Sp[2024-11-20 11:16:14.187260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.603 [2024-11-20 11:16:14.195162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.603 [2024-11-20 11:16:14.195184] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:21.603 [2024-11-20 11:16:14.195191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.603 [2024-11-20 11:16:14.195196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.603 [2024-11-20 11:16:14.195200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.603 [2024-11-20 11:16:14.195205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.603 [2024-11-20 11:16:14.195236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:21.603 [2024-11-20 11:16:14.195243] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:21.603 [2024-11-20 11:16:14.196238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.603 [2024-11-20 11:16:14.199167] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:21.603 [2024-11-20 11:16:14.199174] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:21.603 [2024-11-20 11:16:14.199257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:21.603 [2024-11-20 11:16:14.199264] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:21.603 [2024-11-20 11:16:14.199304] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:21.603 [2024-11-20 11:16:14.200276] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.603 are Threshold: 0% 00:15:21.603 Life Percentage Used: 0% 00:15:21.603 Data Units Read: 0 00:15:21.603 Data Units Written: 0 00:15:21.603 Host Read Commands: 0 00:15:21.603 Host Write Commands: 0 00:15:21.603 Controller Busy Time: 0 minutes 00:15:21.603 Power Cycles: 0 00:15:21.603 Power On Hours: 0 hours 00:15:21.603 Unsafe Shutdowns: 0 00:15:21.603 Unrecoverable Media Errors: 0 00:15:21.603 Lifetime Error Log Entries: 0 00:15:21.603 Warning Temperature Time: 0 minutes 00:15:21.603 Critical Temperature Time: 0 minutes 00:15:21.603 00:15:21.603 Number of Queues 00:15:21.603 ================ 00:15:21.603 Number of I/O Submission Queues: 127 00:15:21.603 Number of I/O Completion Queues: 127 00:15:21.603 00:15:21.603 Active Namespaces 00:15:21.603 ================= 00:15:21.603 Namespace ID:1 00:15:21.603 Error Recovery Timeout: Unlimited 00:15:21.603 Command Set Identifier: NVM (00h) 00:15:21.603 Deallocate: Supported 00:15:21.603 Deallocated/Unwritten Error: Not Supported 00:15:21.603 Deallocated Read Value: Unknown 00:15:21.603 Deallocate in Write Zeroes: Not Supported 00:15:21.603 Deallocated Guard Field: 0xFFFF 00:15:21.603 Flush: Supported 00:15:21.603 Reservation: Supported 00:15:21.603 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.603 Size (in LBAs): 131072 (0GiB) 00:15:21.603 Capacity (in LBAs): 131072 (0GiB) 00:15:21.603 Utilization (in LBAs): 131072 (0GiB) 00:15:21.603 NGUID: D73D3F6FD8C544B584D517FC17126916 00:15:21.603 UUID: d73d3f6f-d8c5-44b5-84d5-17fc17126916 00:15:21.603 Thin Provisioning: Not Supported 00:15:21.603 Per-NS Atomic Units: Yes 00:15:21.603 Atomic Boundary Size (Normal): 0 00:15:21.603 Atomic Boundary Size (PFail): 0 00:15:21.603 Atomic Boundary Offset: 0 00:15:21.603 Maximum Single Source Range Length: 65535 00:15:21.603 Maximum Copy Length: 65535 00:15:21.603 Maximum Source Range Count: 1 00:15:21.603 NGUID/EUI64 Never Reused: No 00:15:21.603 Namespace Write Protected: No 00:15:21.603 Number of LBA Formats: 1 00:15:21.603 Current LBA Format: LBA Format #00 00:15:21.603 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.603 00:15:21.603 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.908 [2024-11-20 11:16:14.390571] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:27.190 Initializing NVMe Controllers 00:15:27.190 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.190 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.190 Initialization complete. Launching workers. 00:15:27.190 ======================================================== 00:15:27.190 Latency(us) 00:15:27.190 Device Information : IOPS MiB/s Average min max 00:15:27.190 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40024.00 156.34 3198.15 842.28 8683.14 00:15:27.190 ======================================================== 00:15:27.190 Total : 40024.00 156.34 3198.15 842.28 8683.14 00:15:27.190 00:15:27.190 [2024-11-20 11:16:19.492358] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:27.190 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.190 [2024-11-20 11:16:19.683921] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:32.472 Initializing NVMe Controllers 00:15:32.472 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:32.472 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:32.472 Initialization complete. Launching workers. 00:15:32.472 ======================================================== 00:15:32.472 Latency(us) 00:15:32.472 Device Information : IOPS MiB/s Average min max 00:15:32.472 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40033.20 156.38 3197.33 844.37 7753.89 00:15:32.472 ======================================================== 00:15:32.472 Total : 40033.20 156.38 3197.33 844.37 7753.89 00:15:32.472 00:15:32.472 [2024-11-20 11:16:24.703941] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.472 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:32.473 [2024-11-20 11:16:24.903551] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.757 [2024-11-20 11:16:30.039251] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.757 Initializing NVMe Controllers 00:15:37.757 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.757 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:37.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:37.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:37.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:37.757 Initialization complete. Launching workers. 
00:15:37.757 Starting thread on core 2 00:15:37.757 Starting thread on core 3 00:15:37.757 Starting thread on core 1 00:15:37.757 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:37.757 [2024-11-20 11:16:30.286530] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.057 [2024-11-20 11:16:33.346377] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.057 Initializing NVMe Controllers 00:15:41.057 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.057 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.058 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.058 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.058 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.058 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.058 Initialization complete. Launching workers. 00:15:41.058 Starting thread on core 1 with urgent priority queue 00:15:41.058 Starting thread on core 2 with urgent priority queue 00:15:41.058 Starting thread on core 3 with urgent priority queue 00:15:41.058 Starting thread on core 0 with urgent priority queue 00:15:41.058 SPDK bdev Controller (SPDK2 ) core 0: 13364.00 IO/s 7.48 secs/100000 ios 00:15:41.058 SPDK bdev Controller (SPDK2 ) core 1: 12116.00 IO/s 8.25 secs/100000 ios 00:15:41.058 SPDK bdev Controller (SPDK2 ) core 2: 12065.33 IO/s 8.29 secs/100000 ios 00:15:41.058 SPDK bdev Controller (SPDK2 ) core 3: 12640.00 IO/s 7.91 secs/100000 ios 00:15:41.058 ======================================================== 00:15:41.058 00:15:41.058 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.058 [2024-11-20 11:16:33.580305] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.058 Initializing NVMe Controllers 00:15:41.058 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.058 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.058 Namespace ID: 1 size: 0GB 00:15:41.058 Initialization complete. 00:15:41.058 INFO: using host memory buffer for IO 00:15:41.058 Hello world! 
00:15:41.058 [2024-11-20 11:16:33.592365] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.058 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.318 [2024-11-20 11:16:33.827820] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.260 Initializing NVMe Controllers 00:15:42.260 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.260 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.260 Initialization complete. Launching workers. 00:15:42.260 submit (in ns) avg, min, max = 5721.4, 2825.8, 3999305.8 00:15:42.260 complete (in ns) avg, min, max = 17575.4, 1639.2, 4001729.2 00:15:42.260 00:15:42.260 Submit histogram 00:15:42.260 ================ 00:15:42.260 Range in us Cumulative Count 00:15:42.260 2.813 - 2.827: 0.0049% ( 1) 00:15:42.260 2.827 - 2.840: 0.6765% ( 137) 00:15:42.260 2.840 - 2.853: 1.8237% ( 234) 00:15:42.260 2.853 - 2.867: 4.6720% ( 581) 00:15:42.260 2.867 - 2.880: 9.5402% ( 993) 00:15:42.260 2.880 - 2.893: 15.6143% ( 1239) 00:15:42.260 2.893 - 2.907: 20.9187% ( 1082) 00:15:42.260 2.907 - 2.920: 27.2135% ( 1284) 00:15:42.260 2.920 - 2.933: 32.7875% ( 1137) 00:15:42.260 2.933 - 2.947: 38.1263% ( 1089) 00:15:42.260 2.947 - 2.960: 43.2788% ( 1051) 00:15:42.260 2.960 - 2.973: 48.3234% ( 1029) 00:15:42.260 2.973 - 2.987: 54.0004% ( 1158) 00:15:42.260 2.987 - 3.000: 61.3639% ( 1502) 00:15:42.260 3.000 - 3.013: 69.9480% ( 1751) 00:15:42.260 3.013 - 3.027: 78.2920% ( 1702) 00:15:42.260 3.027 - 3.040: 85.8663% ( 1545) 00:15:42.260 3.040 - 3.053: 91.7835% ( 1207) 00:15:42.260 3.053 - 3.067: 95.0436% ( 665) 00:15:42.260 3.067 - 3.080: 97.3331% ( 467) 00:15:42.260 3.080 - 3.093: 98.6469% ( 268) 00:15:42.260 3.093 - 3.107: 99.2401% ( 121) 00:15:42.260 3.107 - 3.120: 99.4754% ( 48) 00:15:42.260 3.120 - 3.133: 99.5490% ( 15) 00:15:42.260 3.133 - 3.147: 99.5735% ( 5) 00:15:42.260 3.147 - 3.160: 99.5784% ( 1) 00:15:42.260 3.200 - 3.213: 99.5833% ( 1) 00:15:42.260 3.253 - 3.267: 99.5882% ( 1) 00:15:42.260 3.280 - 3.293: 99.5931% ( 1) 00:15:42.260 3.400 - 3.413: 99.5980% ( 1) 00:15:42.260 3.440 - 3.467: 99.6029% ( 1) 00:15:42.260 3.707 - 3.733: 99.6127% ( 2) 00:15:42.260 3.787 - 3.813: 99.6225% ( 2) 00:15:42.260 3.893 - 3.920: 99.6274% ( 1) 00:15:42.260 3.973 - 4.000: 99.6323% ( 1) 00:15:42.260 4.027 - 4.053: 99.6372% ( 1) 00:15:42.260 4.160 - 4.187: 99.6421% ( 1) 00:15:42.260 4.373 - 4.400: 99.6470% ( 1) 00:15:42.260 4.453 - 4.480: 99.6519% ( 1) 00:15:42.260 4.587 - 4.613: 99.6617% ( 2) 00:15:42.260 4.667 - 4.693: 99.6666% ( 1) 00:15:42.260 4.720 - 4.747: 99.6715% ( 1) 00:15:42.260 4.773 - 4.800: 99.6764% ( 1) 00:15:42.260 4.827 - 4.853: 99.6813% ( 1) 00:15:42.260 4.853 - 4.880: 99.6862% ( 1) 00:15:42.260 4.907 - 4.933: 99.6960% ( 2) 00:15:42.260 4.960 - 4.987: 99.7010% ( 1) 00:15:42.260 4.987 - 5.013: 99.7108% ( 2) 00:15:42.260 5.067 - 5.093: 99.7157% ( 1) 00:15:42.260 5.147 - 5.173: 99.7206% ( 1) 00:15:42.260 5.173 - 5.200: 99.7255% ( 1) 00:15:42.260 5.733 - 5.760: 99.7304% ( 1) 00:15:42.260 5.813 - 5.840: 99.7353% ( 1) 00:15:42.260 6.000 - 6.027: 99.7402% ( 1) 00:15:42.260 6.027 - 6.053: 99.7451% ( 1) 00:15:42.260 6.053 - 6.080: 99.7549% ( 2) 00:15:42.260 6.080 - 6.107: 99.7647% ( 2) 00:15:42.260 6.107 - 6.133: 
99.7745% ( 2) 00:15:42.260 6.187 - 6.213: 99.7794% ( 1) 00:15:42.260 6.213 - 6.240: 99.7843% ( 1) 00:15:42.260 6.293 - 6.320: 99.7892% ( 1) 00:15:42.260 6.320 - 6.347: 99.7941% ( 1) 00:15:42.260 6.347 - 6.373: 99.7990% ( 1) 00:15:42.260 6.400 - 6.427: 99.8039% ( 1) 00:15:42.260 6.480 - 6.507: 99.8137% ( 2) 00:15:42.260 6.640 - 6.667: 99.8186% ( 1) 00:15:42.260 6.667 - 6.693: 99.8284% ( 2) 00:15:42.260 6.720 - 6.747: 99.8382% ( 2) 00:15:42.260 6.773 - 6.800: 99.8480% ( 2) 00:15:42.260 6.827 - 6.880: 99.8529% ( 1) 00:15:42.260 6.880 - 6.933: 99.8578% ( 1) 00:15:42.261 6.933 - 6.987: 99.8627% ( 1) 00:15:42.261 6.987 - 7.040: 99.8725% ( 2) 00:15:42.261 7.040 - 7.093: 99.8774% ( 1) 00:15:42.261 7.093 - 7.147: 99.8872% ( 2) 00:15:42.261 7.147 - 7.200: 99.8970% ( 2) 00:15:42.261 7.413 - 7.467: 99.9020% ( 1) 00:15:42.261 7.680 - 7.733: 99.9069% ( 1) 00:15:42.261 8.000 - 8.053: 99.9118% ( 1) 00:15:42.261 [2024-11-20 11:16:34.922670] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.261 8.320 - 8.373: 99.9167% ( 1) 00:15:42.261 8.480 - 8.533: 99.9216% ( 1) 00:15:42.261 10.987 - 11.040: 99.9265% ( 1) 00:15:42.261 13.013 - 13.067: 99.9314% ( 1) 00:15:42.261 3986.773 - 4014.080: 100.0000% ( 14) 00:15:42.261 00:15:42.261 Complete histogram 00:15:42.261 ================== 00:15:42.261 Range in us Cumulative Count 00:15:42.261 1.633 - 1.640: 0.0049% ( 1) 00:15:42.261 1.640 - 1.647: 0.6765% ( 137) 00:15:42.261 1.647 - 1.653: 1.0050% ( 67) 00:15:42.261 1.653 - 1.660: 1.0981% ( 19) 00:15:42.261 1.660 - 1.667: 1.2746% ( 36) 00:15:42.261 1.667 - 1.673: 1.4266% ( 31) 00:15:42.261 1.673 - 1.680: 16.8742% ( 3151) 00:15:42.261 1.680 - 1.687: 52.7209% ( 7312) 00:15:42.261 1.687 - 1.693: 55.2260% ( 511) 00:15:42.261 1.693 - 1.700: 66.8203% ( 2365) 00:15:42.261 1.700 - 1.707: 75.1299% ( 1695) 00:15:42.261 1.707 - 1.720: 82.5963% ( 1523) 00:15:42.261 1.720 - 1.733: 83.6553% ( 216) 00:15:42.261 1.733 - 1.747: 87.6213% ( 809) 00:15:42.261 1.747 - 1.760: 93.1611% ( 1130) 00:15:42.261 1.760 - 1.773: 97.0340% ( 790) 00:15:42.261 1.773 - 1.787: 98.8038% ( 361) 00:15:42.261 1.787 - 1.800: 99.3578% ( 113) 00:15:42.261 1.800 - 1.813: 99.4558% ( 20) 00:15:42.261 1.813 - 1.827: 99.4656% ( 2) 00:15:42.261 3.347 - 3.360: 99.4705% ( 1) 00:15:42.261 3.400 - 3.413: 99.4754% ( 1) 00:15:42.261 3.947 - 3.973: 99.4803% ( 1) 00:15:42.261 4.533 - 4.560: 99.4852% ( 1) 00:15:42.261 4.827 - 4.853: 99.4901% ( 1) 00:15:42.261 4.853 - 4.880: 99.4950% ( 1) 00:15:42.261 4.933 - 4.960: 99.5000% ( 1) 00:15:42.261 4.960 - 4.987: 99.5049% ( 1) 00:15:42.261 5.093 - 5.120: 99.5098% ( 1) 00:15:42.261 5.120 - 5.147: 99.5147% ( 1) 00:15:42.261 5.440 - 5.467: 99.5245% ( 2) 00:15:42.261 5.493 - 5.520: 99.5294% ( 1) 00:15:42.261 5.573 - 5.600: 99.5343% ( 1) 00:15:42.261 5.707 - 5.733: 99.5392% ( 1) 00:15:42.261 5.760 - 5.787: 99.5441% ( 1) 00:15:42.261 5.787 - 5.813: 99.5490% ( 1) 00:15:42.261 5.867 - 5.893: 99.5539% ( 1) 00:15:42.261 5.947 - 5.973: 99.5588% ( 1) 00:15:42.261 6.000 - 6.027: 99.5637% ( 1) 00:15:42.261 6.160 - 6.187: 99.5686% ( 1) 00:15:42.261 6.533 - 6.560: 99.5735% ( 1) 00:15:42.261 7.147 - 7.200: 99.5784% ( 1) 00:15:42.261 9.173 - 9.227: 99.5833% ( 1) 00:15:42.261 11.200 - 11.253: 99.5882% ( 1) 00:15:42.261 11.360 - 11.413: 99.5931% ( 1) 00:15:42.261 33.707 - 33.920: 99.5980% ( 1) 00:15:42.261 117.760 - 118.613: 99.6029% ( 1) 00:15:42.261 3986.773 - 4014.080: 100.0000% ( 81) 00:15:42.261 00:15:42.261 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 
-- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:42.261 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:42.261 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:42.261 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:42.261 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.522 [ 00:15:42.522 { 00:15:42.522 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.522 "subtype": "Discovery", 00:15:42.522 "listen_addresses": [], 00:15:42.522 "allow_any_host": true, 00:15:42.522 "hosts": [] 00:15:42.522 }, 00:15:42.522 { 00:15:42.522 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.522 "subtype": "NVMe", 00:15:42.522 "listen_addresses": [ 00:15:42.522 { 00:15:42.522 "trtype": "VFIOUSER", 00:15:42.522 "adrfam": "IPv4", 00:15:42.522 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.522 "trsvcid": "0" 00:15:42.522 } 00:15:42.522 ], 00:15:42.522 "allow_any_host": true, 00:15:42.522 "hosts": [], 00:15:42.522 "serial_number": "SPDK1", 00:15:42.522 "model_number": "SPDK bdev Controller", 00:15:42.522 "max_namespaces": 32, 00:15:42.522 "min_cntlid": 1, 00:15:42.522 "max_cntlid": 65519, 00:15:42.522 "namespaces": [ 00:15:42.522 { 00:15:42.522 "nsid": 1, 00:15:42.522 "bdev_name": "Malloc1", 00:15:42.522 "name": "Malloc1", 00:15:42.522 "nguid": "F0C00E92325D4FABBB70DFD441B27E21", 00:15:42.522 "uuid": "f0c00e92-325d-4fab-bb70-dfd441b27e21" 00:15:42.522 }, 00:15:42.522 { 00:15:42.522 "nsid": 2, 00:15:42.522 "bdev_name": "Malloc3", 00:15:42.522 "name": "Malloc3", 00:15:42.522 "nguid": "14F519BA8A8949CF9A5F0F66A5A5688E", 00:15:42.522 "uuid": "14f519ba-8a89-49cf-9a5f-0f66a5a5688e" 00:15:42.522 } 00:15:42.522 ] 00:15:42.522 }, 00:15:42.522 { 00:15:42.522 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.522 "subtype": "NVMe", 00:15:42.522 "listen_addresses": [ 00:15:42.522 { 00:15:42.522 "trtype": "VFIOUSER", 00:15:42.522 "adrfam": "IPv4", 00:15:42.522 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.522 "trsvcid": "0" 00:15:42.522 } 00:15:42.522 ], 00:15:42.522 "allow_any_host": true, 00:15:42.522 "hosts": [], 00:15:42.522 "serial_number": "SPDK2", 00:15:42.522 "model_number": "SPDK bdev Controller", 00:15:42.522 "max_namespaces": 32, 00:15:42.522 "min_cntlid": 1, 00:15:42.522 "max_cntlid": 65519, 00:15:42.522 "namespaces": [ 00:15:42.522 { 00:15:42.522 "nsid": 1, 00:15:42.522 "bdev_name": "Malloc2", 00:15:42.522 "name": "Malloc2", 00:15:42.522 "nguid": "D73D3F6FD8C544B584D517FC17126916", 00:15:42.522 "uuid": "d73d3f6f-d8c5-44b5-84d5-17fc17126916" 00:15:42.522 } 00:15:42.522 ] 00:15:42.522 } 00:15:42.522 ] 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2693387 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:42.522 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:42.783 [2024-11-20 11:16:35.304508] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:42.783 Malloc4 00:15:42.783 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:42.783 [2024-11-20 11:16:35.482716] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.783 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.783 Asynchronous Event Request test 00:15:42.783 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.783 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.783 Registering asynchronous event callbacks... 00:15:42.783 Starting namespace attribute notice tests for all controllers... 00:15:42.783 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:42.783 aer_cb - Changed Namespace 00:15:42.783 Cleaning up... 
00:15:43.043 [ 00:15:43.043 { 00:15:43.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.043 "subtype": "Discovery", 00:15:43.043 "listen_addresses": [], 00:15:43.043 "allow_any_host": true, 00:15:43.043 "hosts": [] 00:15:43.043 }, 00:15:43.043 { 00:15:43.043 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.043 "subtype": "NVMe", 00:15:43.043 "listen_addresses": [ 00:15:43.043 { 00:15:43.043 "trtype": "VFIOUSER", 00:15:43.044 "adrfam": "IPv4", 00:15:43.044 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.044 "trsvcid": "0" 00:15:43.044 } 00:15:43.044 ], 00:15:43.044 "allow_any_host": true, 00:15:43.044 "hosts": [], 00:15:43.044 "serial_number": "SPDK1", 00:15:43.044 "model_number": "SPDK bdev Controller", 00:15:43.044 "max_namespaces": 32, 00:15:43.044 "min_cntlid": 1, 00:15:43.044 "max_cntlid": 65519, 00:15:43.044 "namespaces": [ 00:15:43.044 { 00:15:43.044 "nsid": 1, 00:15:43.044 "bdev_name": "Malloc1", 00:15:43.044 "name": "Malloc1", 00:15:43.044 "nguid": "F0C00E92325D4FABBB70DFD441B27E21", 00:15:43.044 "uuid": "f0c00e92-325d-4fab-bb70-dfd441b27e21" 00:15:43.044 }, 00:15:43.044 { 00:15:43.044 "nsid": 2, 00:15:43.044 "bdev_name": "Malloc3", 00:15:43.044 "name": "Malloc3", 00:15:43.044 "nguid": "14F519BA8A8949CF9A5F0F66A5A5688E", 00:15:43.044 "uuid": "14f519ba-8a89-49cf-9a5f-0f66a5a5688e" 00:15:43.044 } 00:15:43.044 ] 00:15:43.044 }, 00:15:43.044 { 00:15:43.044 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.044 "subtype": "NVMe", 00:15:43.044 "listen_addresses": [ 00:15:43.044 { 00:15:43.044 "trtype": "VFIOUSER", 00:15:43.044 "adrfam": "IPv4", 00:15:43.044 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.044 "trsvcid": "0" 00:15:43.044 } 00:15:43.044 ], 00:15:43.044 "allow_any_host": true, 00:15:43.044 "hosts": [], 00:15:43.044 "serial_number": "SPDK2", 00:15:43.044 "model_number": "SPDK bdev Controller", 00:15:43.044 "max_namespaces": 32, 00:15:43.044 "min_cntlid": 1, 00:15:43.044 "max_cntlid": 65519, 00:15:43.044 "namespaces": [ 00:15:43.044 { 00:15:43.044 "nsid": 1, 00:15:43.044 "bdev_name": "Malloc2", 00:15:43.044 "name": "Malloc2", 00:15:43.044 "nguid": "D73D3F6FD8C544B584D517FC17126916", 00:15:43.044 "uuid": "d73d3f6f-d8c5-44b5-84d5-17fc17126916" 00:15:43.044 }, 00:15:43.044 { 00:15:43.044 "nsid": 2, 00:15:43.044 "bdev_name": "Malloc4", 00:15:43.044 "name": "Malloc4", 00:15:43.044 "nguid": "584BD644AF274489AC78936789075E3A", 00:15:43.044 "uuid": "584bd644-af27-4489-ac78-936789075e3a" 00:15:43.044 } 00:15:43.044 ] 00:15:43.044 } 00:15:43.044 ] 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2693387 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2684303 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2684303 ']' 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2684303 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2684303 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2684303' 00:15:43.044 killing process with pid 2684303 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2684303 00:15:43.044 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2684303 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2693539 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2693539' 00:15:43.305 Process pid: 2693539 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2693539 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2693539 ']' 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.305 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.306 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.306 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.306 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:43.306 [2024-11-20 11:16:35.955857] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:43.306 [2024-11-20 11:16:35.956797] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:15:43.306 [2024-11-20 11:16:35.956841] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.306 [2024-11-20 11:16:36.042220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.566 [2024-11-20 11:16:36.077214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.566 [2024-11-20 11:16:36.077250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.566 [2024-11-20 11:16:36.077255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.566 [2024-11-20 11:16:36.077261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.566 [2024-11-20 11:16:36.077266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.566 [2024-11-20 11:16:36.078648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.566 [2024-11-20 11:16:36.078693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.566 [2024-11-20 11:16:36.078846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.566 [2024-11-20 11:16:36.078848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.566 [2024-11-20 11:16:36.132117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:43.566 [2024-11-20 11:16:36.133141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:43.566 [2024-11-20 11:16:36.133909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:43.566 [2024-11-20 11:16:36.134491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:43.566 [2024-11-20 11:16:36.134509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:44.137 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:44.137 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:44.137 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:45.079 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:45.341 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:45.341 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:45.341 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:45.341 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:45.341 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:45.601 Malloc1 00:15:45.601 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:45.863 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:45.863 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:46.123 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.123 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:46.123 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:46.383 Malloc2 00:15:46.383 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:46.646 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:46.646 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:46.907 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:46.907 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2693539 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2693539 ']' 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2693539 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2693539 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2693539' 00:15:46.908 killing process with pid 2693539 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2693539 00:15:46.908 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2693539 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:47.168 00:15:47.168 real 0m50.902s 00:15:47.168 user 3m15.029s 00:15:47.168 sys 0m2.698s 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:47.168 ************************************ 00:15:47.168 END TEST nvmf_vfio_user 00:15:47.168 ************************************ 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.168 ************************************ 00:15:47.168 START TEST nvmf_vfio_user_nvme_compliance 00:15:47.168 ************************************ 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:47.168 * Looking for test storage... 
00:15:47.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:47.168 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.430 --rc genhtml_branch_coverage=1 00:15:47.430 --rc genhtml_function_coverage=1 00:15:47.430 --rc genhtml_legend=1 00:15:47.430 --rc geninfo_all_blocks=1 00:15:47.430 --rc geninfo_unexecuted_blocks=1 00:15:47.430 00:15:47.430 ' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.430 --rc genhtml_branch_coverage=1 00:15:47.430 --rc genhtml_function_coverage=1 00:15:47.430 --rc genhtml_legend=1 00:15:47.430 --rc geninfo_all_blocks=1 00:15:47.430 --rc geninfo_unexecuted_blocks=1 00:15:47.430 00:15:47.430 ' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.430 --rc genhtml_branch_coverage=1 00:15:47.430 --rc genhtml_function_coverage=1 00:15:47.430 --rc genhtml_legend=1 00:15:47.430 --rc geninfo_all_blocks=1 00:15:47.430 --rc geninfo_unexecuted_blocks=1 00:15:47.430 00:15:47.430 ' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:47.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.430 --rc genhtml_branch_coverage=1 00:15:47.430 --rc genhtml_function_coverage=1 00:15:47.430 --rc genhtml_legend=1 00:15:47.430 --rc geninfo_all_blocks=1 00:15:47.430 --rc 
geninfo_unexecuted_blocks=1 00:15:47.430 00:15:47.430 ' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.430 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.431 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2694483 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2694483' 00:15:47.431 Process pid: 2694483 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2694483 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2694483 ']' 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.431 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.431 [2024-11-20 11:16:40.066453] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:15:47.431 [2024-11-20 11:16:40.066530] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.431 [2024-11-20 11:16:40.152779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.691 [2024-11-20 11:16:40.186777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.691 [2024-11-20 11:16:40.186812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.691 [2024-11-20 11:16:40.186818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.691 [2024-11-20 11:16:40.186823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.691 [2024-11-20 11:16:40.186827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.691 [2024-11-20 11:16:40.188224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.691 [2024-11-20 11:16:40.188515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.691 [2024-11-20 11:16:40.188516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.262 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.262 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:48.262 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.205 malloc0 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:49.205 11:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.205 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.466 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.466 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:49.466 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.466 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:49.466 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.466 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:49.466 00:15:49.466 00:15:49.466 CUnit - A unit testing framework for C - Version 2.1-3 00:15:49.466 http://cunit.sourceforge.net/ 00:15:49.466 00:15:49.466 00:15:49.466 Suite: nvme_compliance 00:15:49.466 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 11:16:42.118531] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.466 [2024-11-20 11:16:42.119809] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:49.466 [2024-11-20 11:16:42.119820] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:49.466 [2024-11-20 11:16:42.119824] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:49.466 [2024-11-20 11:16:42.121550] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.466 passed 00:15:49.466 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 11:16:42.197033] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.466 [2024-11-20 11:16:42.202063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.727 passed 00:15:49.727 Test: admin_identify_ns ...[2024-11-20 11:16:42.277515] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.727 [2024-11-20 11:16:42.341167] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:49.727 [2024-11-20 11:16:42.349165] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:49.727 [2024-11-20 11:16:42.370247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:49.727 passed 00:15:49.727 Test: admin_get_features_mandatory_features ...[2024-11-20 11:16:42.442448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.727 [2024-11-20 11:16:42.445463] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.987 passed 00:15:49.987 Test: admin_get_features_optional_features ...[2024-11-20 11:16:42.520900] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.987 [2024-11-20 11:16:42.524922] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.987 passed 00:15:49.987 Test: admin_set_features_number_of_queues ...[2024-11-20 11:16:42.599541] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.987 [2024-11-20 11:16:42.707265] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.247 passed 00:15:50.247 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 11:16:42.779497] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.247 [2024-11-20 11:16:42.782516] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.247 passed 00:15:50.247 Test: admin_get_log_page_with_lpo ...[2024-11-20 11:16:42.858194] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.247 [2024-11-20 11:16:42.927169] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:50.247 [2024-11-20 11:16:42.940219] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.247 passed 00:15:50.508 Test: fabric_property_get ...[2024-11-20 11:16:43.015290] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.508 [2024-11-20 11:16:43.016497] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:50.508 [2024-11-20 11:16:43.018316] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.508 passed 00:15:50.508 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 11:16:43.093774] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.508 [2024-11-20 11:16:43.094968] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:50.508 [2024-11-20 11:16:43.096791] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.508 passed 00:15:50.508 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 11:16:43.173530] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.768 [2024-11-20 11:16:43.258164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:50.768 [2024-11-20 11:16:43.274166] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:50.769 [2024-11-20 11:16:43.279241] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.769 passed 00:15:50.769 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 11:16:43.352440] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.769 [2024-11-20 11:16:43.353635] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:50.769 [2024-11-20 11:16:43.355460] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:50.769 passed 00:15:50.769 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 11:16:43.430150] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.029 [2024-11-20 11:16:43.508163] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.029 [2024-11-20 11:16:43.532172] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.029 [2024-11-20 11:16:43.537234] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.029 passed 00:15:51.029 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 11:16:43.611430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.029 [2024-11-20 11:16:43.612633] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:51.029 [2024-11-20 11:16:43.612649] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:51.029 [2024-11-20 11:16:43.614446] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.029 passed 00:15:51.029 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 11:16:43.688510] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.290 [2024-11-20 11:16:43.784164] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:51.290 [2024-11-20 11:16:43.792163] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:51.290 [2024-11-20 11:16:43.800164] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:51.290 [2024-11-20 11:16:43.808163] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:51.290 [2024-11-20 11:16:43.837239] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.290 passed 00:15:51.290 Test: admin_create_io_sq_verify_pc ...[2024-11-20 11:16:43.911439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.290 [2024-11-20 11:16:43.928171] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:51.290 [2024-11-20 11:16:43.945605] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.290 passed 00:15:51.290 Test: admin_create_io_qp_max_qps ...[2024-11-20 11:16:44.021026] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.673 [2024-11-20 11:16:45.128166] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:52.933 [2024-11-20 11:16:45.510731] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.933 passed 00:15:52.933 Test: admin_create_io_sq_shared_cq ...[2024-11-20 11:16:45.586520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.193 [2024-11-20 11:16:45.721164] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.193 [2024-11-20 11:16:45.758204] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.193 passed 00:15:53.193 00:15:53.193 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.193 suites 1 1 n/a 0 0 00:15:53.193 tests 18 18 18 0 0 00:15:53.193 asserts 
360 360 360 0 n/a 00:15:53.193 00:15:53.193 Elapsed time = 1.495 seconds 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2694483 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2694483 ']' 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2694483 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2694483 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.193 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.194 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2694483' 00:15:53.194 killing process with pid 2694483 00:15:53.194 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2694483 00:15:53.194 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2694483 00:15:53.455 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:53.455 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:53.455 00:15:53.455 real 0m6.212s 00:15:53.455 user 0m17.610s 00:15:53.455 sys 0m0.559s 00:15:53.455 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.455 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:53.455 ************************************ 00:15:53.455 END TEST nvmf_vfio_user_nvme_compliance 00:15:53.455 ************************************ 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.455 ************************************ 00:15:53.455 START TEST nvmf_vfio_user_fuzz 00:15:53.455 ************************************ 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:53.455 * Looking for test storage... 
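The killprocess trace above (autotest_common.sh@954-@978) reduces to a liveness probe followed by kill-and-reap. A condensed sketch of that flow, not the verbatim helper, assuming an ordinary non-sudo child such as the nvmf_tgt reactor seen here:

    # Condensed sketch of the traced kill -0 / ps / kill / wait sequence.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1               # @954: reject an empty pid
        kill -0 "$pid" 2>/dev/null || return 0    # @958: process already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 in this run
        echo "killing process with pid $pid"      # @972
        kill "$pid" && wait "$pid" 2>/dev/null    # @973/@978: kill, then reap
    }

The real helper also special-cases a process name of sudo (@964) so it does not signal the wrapper instead of the wrapped child; that branch is omitted above.
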
00:15:53.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.455 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:53.717 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.718 --rc genhtml_branch_coverage=1 00:15:53.718 --rc genhtml_function_coverage=1 00:15:53.718 --rc genhtml_legend=1 00:15:53.718 --rc geninfo_all_blocks=1 00:15:53.718 --rc geninfo_unexecuted_blocks=1 00:15:53.718 00:15:53.718 ' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.718 --rc genhtml_branch_coverage=1 00:15:53.718 --rc genhtml_function_coverage=1 00:15:53.718 --rc genhtml_legend=1 00:15:53.718 --rc geninfo_all_blocks=1 00:15:53.718 --rc geninfo_unexecuted_blocks=1 00:15:53.718 00:15:53.718 ' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.718 --rc genhtml_branch_coverage=1 00:15:53.718 --rc genhtml_function_coverage=1 00:15:53.718 --rc genhtml_legend=1 00:15:53.718 --rc geninfo_all_blocks=1 00:15:53.718 --rc geninfo_unexecuted_blocks=1 00:15:53.718 00:15:53.718 ' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.718 --rc genhtml_branch_coverage=1 00:15:53.718 --rc genhtml_function_coverage=1 00:15:53.718 --rc genhtml_legend=1 00:15:53.718 --rc geninfo_all_blocks=1 00:15:53.718 --rc geninfo_unexecuted_blocks=1 00:15:53.718 00:15:53.718 ' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:53.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2695707 00:15:53.718 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2695707' 00:15:53.718 Process pid: 2695707 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2695707 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2695707 ']' 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
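Condensing the launch just traced: the fuzz target is started in the background and waitforlisten blocks until it answers RPCs on /var/tmp/spdk.sock. A minimal stand-in for that wait loop (the real waitforlisten in autotest_common.sh is considerably more defensive), with paths relative to the SPDK checkout:

    # Sketch of the traced launch; flags copied from the log line above.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # shm id 0, full trace mask, one core
    nvmfpid=$!
    # Poll until the app is up; rpc_get_methods is a cheap liveness probe.
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Note the core mask: -m 0x1 pins this target to a single core, whereas the compliance target earlier ran with -m 0x7 and started three reactors.
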
00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.719 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.662 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.662 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:54.662 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.604 malloc0 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.604 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.605 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:55.605 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.605 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.605 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.605 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
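The rpc_cmd calls above (vfio_user_fuzz.sh@32-@39) provision the target; rpc_cmd is roughly a test-harness wrapper over scripts/rpc.py, so the same setup expressed as direct invocations would look like:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0      # 64 MB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The trid assembled at @41 points the fuzzer at that listener. In the run that follows, -F passes the trid through, -t 30 bounds the fuzz to roughly thirty seconds (consistent with the elapsed time reported at completion), -S 123456 seeds the randomizer (the completion dump later echoes per-queue random_seed values), and -m 0x2 is the usual SPDK core mask.
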
00:15:55.605 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:27.719 Fuzzing completed. Shutting down the fuzz application 00:16:27.719 00:16:27.719 Dumping successful admin opcodes: 00:16:27.719 8, 9, 10, 24, 00:16:27.719 Dumping successful io opcodes: 00:16:27.719 0, 00:16:27.719 NS: 0x20000081ef00 I/O qp, Total commands completed: 1280920, total successful commands: 5021, random_seed: 2071550016 00:16:27.719 NS: 0x20000081ef00 admin qp, Total commands completed: 281440, total successful commands: 2266, random_seed: 905048448 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2695707 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2695707 ']' 00:16:27.719 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2695707 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2695707 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2695707' 00:16:27.720 killing process with pid 2695707 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2695707 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2695707 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:27.720 00:16:27.720 real 0m32.790s 00:16:27.720 user 0m34.663s 00:16:27.720 sys 0m26.066s 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.720 
************************************ 00:16:27.720 END TEST nvmf_vfio_user_fuzz 00:16:27.720 ************************************ 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.720 ************************************ 00:16:27.720 START TEST nvmf_auth_target 00:16:27.720 ************************************ 00:16:27.720 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:27.720 * Looking for test storage... 00:16:27.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.720 --rc genhtml_branch_coverage=1 00:16:27.720 --rc genhtml_function_coverage=1 00:16:27.720 --rc genhtml_legend=1 00:16:27.720 --rc geninfo_all_blocks=1 00:16:27.720 --rc geninfo_unexecuted_blocks=1 00:16:27.720 00:16:27.720 ' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.720 --rc genhtml_branch_coverage=1 00:16:27.720 --rc genhtml_function_coverage=1 00:16:27.720 --rc genhtml_legend=1 00:16:27.720 --rc geninfo_all_blocks=1 00:16:27.720 --rc geninfo_unexecuted_blocks=1 00:16:27.720 00:16:27.720 ' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.720 --rc genhtml_branch_coverage=1 00:16:27.720 --rc genhtml_function_coverage=1 00:16:27.720 --rc genhtml_legend=1 00:16:27.720 --rc geninfo_all_blocks=1 00:16:27.720 --rc geninfo_unexecuted_blocks=1 00:16:27.720 00:16:27.720 ' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.720 --rc genhtml_branch_coverage=1 00:16:27.720 --rc genhtml_function_coverage=1 00:16:27.720 --rc genhtml_legend=1 00:16:27.720 --rc geninfo_all_blocks=1 00:16:27.720 --rc geninfo_unexecuted_blocks=1 00:16:27.720 00:16:27.720 ' 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.720 11:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.720 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:27.721 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.306 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:34.307 
11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:34.307 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.307 11:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:34.307 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:34.307 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:34.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
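The device scan above boils down to a short sysfs walk: every whitelisted NVMf-capable PCI function (here two Intel E810 ports, 0x8086:0x159b) is resolved to its kernel netdev by globbing the device's net/ directory. A condensed sketch of what gather_supported_nvmf_pci_devs just did, using the addresses and cvl_0_* names reported in this run:

net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do               # the two E810 functions found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    net_devs+=("${pci_net_devs[@]##*/}")               # keep only the interface names
done
echo "${net_devs[@]}"                                  # cvl_0_0 cvl_0_1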
net_devs+=("${pci_net_devs[@]}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.307 11:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:16:34.307 00:16:34.307 --- 10.0.0.2 ping statistics --- 00:16:34.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.307 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:16:34.307 00:16:34.307 --- 10.0.0.1 ping statistics --- 00:16:34.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.307 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.307 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2705878 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2705878 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2705878 ']' 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
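With both directions pinging, NVMF_APP is rewritten so that every target-side invocation runs inside the namespace, and nvmfappstart launches the target with auth tracing enabled. Roughly (the readiness wait is a sketch; waitforlisten's polling loop is not shown in the trace):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# assumed poll: block until the app answers on its default RPC socket
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done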
00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.308 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2705914 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b101eb6cc676ee8c20dc0d03fb323156609322a536e44955 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.onJ 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b101eb6cc676ee8c20dc0d03fb323156609322a536e44955 0 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b101eb6cc676ee8c20dc0d03fb323156609322a536e44955 0 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b101eb6cc676ee8c20dc0d03fb323156609322a536e44955 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:34.881 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.onJ 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.onJ 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.onJ 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51dcbe691f62c644f704186e19bc6a4843f73140da768eac81392713edf9e140 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gve 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51dcbe691f62c644f704186e19bc6a4843f73140da768eac81392713edf9e140 3 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51dcbe691f62c644f704186e19bc6a4843f73140da768eac81392713edf9e140 3 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51dcbe691f62c644f704186e19bc6a4843f73140da768eac81392713edf9e140 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gve 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gve 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.gve 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:35.143 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bf8b52ed0daa057ea820d873833464ce 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Rqm 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bf8b52ed0daa057ea820d873833464ce 1 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bf8b52ed0daa057ea820d873833464ce 1 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bf8b52ed0daa057ea820d873833464ce 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Rqm 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Rqm 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Rqm 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=653991f81804765b4d382f33fb690aeb984611fa31df9997 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cVx 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 653991f81804765b4d382f33fb690aeb984611fa31df9997 2 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 653991f81804765b4d382f33fb690aeb984611fa31df9997 2 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.144 11:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=653991f81804765b4d382f33fb690aeb984611fa31df9997 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cVx 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cVx 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cVx 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:35.144 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8e524e8f470ccdacc9e1e89dde3f37108beb014e666095d0 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Hkz 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8e524e8f470ccdacc9e1e89dde3f37108beb014e666095d0 2 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8e524e8f470ccdacc9e1e89dde3f37108beb014e666095d0 2 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8e524e8f470ccdacc9e1e89dde3f37108beb014e666095d0 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Hkz 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Hkz 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Hkz 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ccd7e881c9704e0e6ef67a057b5ccdf6 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.n8O 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ccd7e881c9704e0e6ef67a057b5ccdf6 1 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ccd7e881c9704e0e6ef67a057b5ccdf6 1 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ccd7e881c9704e0e6ef67a057b5ccdf6 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:35.405 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.n8O 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.n8O 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.n8O 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:35.405 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2bc43d1760f9031e2566850b40d2a70a93dc045a3a00d31400cf47a781180737 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2Yj 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 2bc43d1760f9031e2566850b40d2a70a93dc045a3a00d31400cf47a781180737 3 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2bc43d1760f9031e2566850b40d2a70a93dc045a3a00d31400cf47a781180737 3 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2bc43d1760f9031e2566850b40d2a70a93dc045a3a00d31400cf47a781180737 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2Yj 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2Yj 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2Yj 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2705878 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2705878 ']' 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.406 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2705914 /var/tmp/host.sock 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2705914 ']' 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:35.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
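All four key pairs above come out of the same recipe: gen_dhchap_key draws len/2 random bytes with xxd, keeps them as a hex string, and feeds that to an inline python snippet (shown only as "python -" in the trace) which wraps it into an NVMe DH-HMAC-CHAP secret. A plausible reconstruction, assuming the payload is the ASCII hex string itself with a little-endian CRC-32 trailer appended before base64 encoding; that assumption matches the lengths and DHHC-1:NN: prefixes of every secret in this run (digest id 0=null, 1=sha256, 2=sha384, 3=sha512):

gen_dhchap_key() {   # usage: gen_dhchap_key <digest> <len>, e.g. "null 48"; hypothetical stand-in
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # as declared in the trace
  local digest=${digests[$1]} len=$2 hex
  hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
  python3 - "$hex" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # the hex string is the secret payload
crc = struct.pack("<I", zlib.crc32(key))    # little-endian CRC-32 trailer
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
}

gen_dhchap_key null 48    # -> DHHC-1:00:...==: shaped like keys[0] above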
00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.667 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.onJ 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.onJ 00:16:35.927 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.onJ 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.gve ]] 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gve 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gve 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gve 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Rqm 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.188 11:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Rqm 00:16:36.188 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Rqm 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cVx ]] 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cVx 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cVx 00:16:36.448 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cVx 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Hkz 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Hkz 00:16:36.715 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Hkz 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.n8O ]] 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n8O 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n8O 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n8O 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:36.976 11:17:29 
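Note the symmetry in the registration pass: each key file is added to two keyrings, once through the target's default RPC socket (rpc_cmd) and once through the host-side spdk_tgt on /var/tmp/host.sock (hostrpc), so both ends can later reference the same key0..key3/ckey0..ckey2 names. Condensed (paths shortened):

rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.onJ                         # target
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.onJ   # host
# repeated likewise for ckey0, key1/ckey1, key2/ckey2 and key3 (key3 has no controller key)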
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Yj 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2Yj 00:16:36.976 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2Yj 00:16:37.237 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:37.237 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:37.237 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.237 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.237 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.237 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.499 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.499 
11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.760 00:16:37.760 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.760 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.760 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.021 { 00:16:38.021 "cntlid": 1, 00:16:38.021 "qid": 0, 00:16:38.021 "state": "enabled", 00:16:38.021 "thread": "nvmf_tgt_poll_group_000", 00:16:38.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.021 "listen_address": { 00:16:38.021 "trtype": "TCP", 00:16:38.021 "adrfam": "IPv4", 00:16:38.021 "traddr": "10.0.0.2", 00:16:38.021 "trsvcid": "4420" 00:16:38.021 }, 00:16:38.021 "peer_address": { 00:16:38.021 "trtype": "TCP", 00:16:38.021 "adrfam": "IPv4", 00:16:38.021 "traddr": "10.0.0.1", 00:16:38.021 "trsvcid": "37360" 00:16:38.021 }, 00:16:38.021 "auth": { 00:16:38.021 "state": "completed", 00:16:38.021 "digest": "sha256", 00:16:38.021 "dhgroup": "null" 00:16:38.021 } 00:16:38.021 } 00:16:38.021 ]' 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.021 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.282 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:38.282 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:38.853 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.124 11:17:31 
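Each digest/dhgroup/keyid combination gets the same verification cycle: restrict the host app to one digest and one DH group, authorize the host NQN on the subsystem with that key pair, attach a controller through the host app, and confirm from the target's qpair list that authentication completed with exactly those parameters. Condensed from the sha256/null iterations above, with the jq probes used by the trace:

rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0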
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.124 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.385 00:16:39.385 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.385 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.385 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.646 { 00:16:39.646 "cntlid": 3, 00:16:39.646 "qid": 0, 00:16:39.646 "state": "enabled", 00:16:39.646 "thread": "nvmf_tgt_poll_group_000", 00:16:39.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.646 "listen_address": { 00:16:39.646 "trtype": "TCP", 00:16:39.646 "adrfam": "IPv4", 00:16:39.646 "traddr": "10.0.0.2", 00:16:39.646 "trsvcid": "4420" 00:16:39.646 }, 00:16:39.646 "peer_address": { 00:16:39.646 "trtype": "TCP", 00:16:39.646 "adrfam": "IPv4", 00:16:39.646 "traddr": "10.0.0.1", 00:16:39.646 "trsvcid": "37382" 00:16:39.646 }, 00:16:39.646 "auth": { 00:16:39.646 "state": "completed", 00:16:39.646 "digest": "sha256", 00:16:39.646 "dhgroup": "null" 00:16:39.646 } 00:16:39.646 } 00:16:39.646 ]' 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.646 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.906 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:39.906 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.477 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.738 11:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.738 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.999 00:16:40.999 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.999 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.999 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.999 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.259 { 00:16:41.259 "cntlid": 5, 00:16:41.259 "qid": 0, 00:16:41.259 "state": "enabled", 00:16:41.259 "thread": "nvmf_tgt_poll_group_000", 00:16:41.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.259 "listen_address": { 00:16:41.259 "trtype": "TCP", 00:16:41.259 "adrfam": "IPv4", 00:16:41.259 "traddr": "10.0.0.2", 00:16:41.259 "trsvcid": "4420" 00:16:41.259 }, 00:16:41.259 "peer_address": { 00:16:41.259 "trtype": "TCP", 00:16:41.259 "adrfam": "IPv4", 00:16:41.259 "traddr": "10.0.0.1", 00:16:41.259 "trsvcid": "37406" 00:16:41.259 }, 00:16:41.259 "auth": { 00:16:41.259 "state": "completed", 00:16:41.259 "digest": "sha256", 00:16:41.259 "dhgroup": "null" 00:16:41.259 } 00:16:41.259 } 00:16:41.259 ]' 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.259 11:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.259 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.519 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:16:41.519 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:16:42.090 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.090 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.090 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.090 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.091 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.091 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.091 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.091 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.351 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:42.612 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.612 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.872 { 00:16:42.872 "cntlid": 7, 00:16:42.872 "qid": 0, 00:16:42.872 "state": "enabled", 00:16:42.872 "thread": "nvmf_tgt_poll_group_000", 00:16:42.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.872 "listen_address": { 00:16:42.872 "trtype": "TCP", 00:16:42.872 "adrfam": "IPv4", 00:16:42.872 "traddr": "10.0.0.2", 00:16:42.872 "trsvcid": "4420" 00:16:42.872 }, 00:16:42.872 "peer_address": { 00:16:42.872 "trtype": "TCP", 00:16:42.872 "adrfam": "IPv4", 00:16:42.872 "traddr": "10.0.0.1", 00:16:42.872 "trsvcid": "56790" 00:16:42.872 }, 00:16:42.872 "auth": { 00:16:42.872 "state": "completed", 00:16:42.872 "digest": "sha256", 00:16:42.872 "dhgroup": "null" 00:16:42.872 } 00:16:42.872 } 00:16:42.872 ]' 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.872 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.133 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:16:43.133 11:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.701 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.960 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.222 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.222 { 00:16:44.222 "cntlid": 9, 00:16:44.222 "qid": 0, 00:16:44.222 "state": "enabled", 00:16:44.222 "thread": "nvmf_tgt_poll_group_000", 00:16:44.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.222 "listen_address": { 00:16:44.222 "trtype": "TCP", 00:16:44.222 "adrfam": "IPv4", 00:16:44.222 "traddr": "10.0.0.2", 00:16:44.222 "trsvcid": "4420" 00:16:44.222 }, 00:16:44.222 "peer_address": { 00:16:44.222 "trtype": "TCP", 00:16:44.222 "adrfam": "IPv4", 00:16:44.222 "traddr": "10.0.0.1", 00:16:44.222 "trsvcid": "56830" 00:16:44.222 }, 00:16:44.222 "auth": { 00:16:44.222 "state": "completed", 00:16:44.222 "digest": "sha256", 00:16:44.222 "dhgroup": "ffdhe2048" 00:16:44.222 } 00:16:44.222 } 00:16:44.222 ]' 00:16:44.222 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.483 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.483 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.483 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:44.483 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.483 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.483 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.483 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.744 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:44.744 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.315 11:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.576 11:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.576 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.576 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.837 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.837 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.837 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.837 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.837 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.837 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.837 { 00:16:45.837 "cntlid": 11, 00:16:45.837 "qid": 0, 00:16:45.837 "state": "enabled", 00:16:45.837 "thread": "nvmf_tgt_poll_group_000", 00:16:45.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.837 "listen_address": { 00:16:45.838 "trtype": "TCP", 00:16:45.838 "adrfam": "IPv4", 00:16:45.838 "traddr": "10.0.0.2", 00:16:45.838 "trsvcid": "4420" 00:16:45.838 }, 00:16:45.838 "peer_address": { 00:16:45.838 "trtype": "TCP", 00:16:45.838 "adrfam": "IPv4", 00:16:45.838 "traddr": "10.0.0.1", 00:16:45.838 "trsvcid": "56868" 00:16:45.838 }, 00:16:45.838 "auth": { 00:16:45.838 "state": "completed", 00:16:45.838 "digest": "sha256", 00:16:45.838 "dhgroup": "ffdhe2048" 00:16:45.838 } 00:16:45.838 } 00:16:45.838 ]' 00:16:45.838 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.838 11:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.838 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.838 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.838 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.098 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.098 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.098 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.098 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:46.098 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.039 11:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.039 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.300 00:16:47.300 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.300 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.300 11:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.560 { 00:16:47.560 "cntlid": 13, 00:16:47.560 "qid": 0, 00:16:47.560 "state": "enabled", 00:16:47.560 "thread": "nvmf_tgt_poll_group_000", 00:16:47.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.560 "listen_address": { 00:16:47.560 "trtype": "TCP", 00:16:47.560 "adrfam": "IPv4", 00:16:47.560 "traddr": "10.0.0.2", 00:16:47.560 "trsvcid": "4420" 00:16:47.560 }, 00:16:47.560 "peer_address": { 00:16:47.560 "trtype": "TCP", 00:16:47.560 "adrfam": "IPv4", 00:16:47.560 "traddr": "10.0.0.1", 00:16:47.560 "trsvcid": "56892" 00:16:47.560 }, 00:16:47.560 "auth": { 00:16:47.560 "state": "completed", 00:16:47.560 "digest": 
"sha256", 00:16:47.560 "dhgroup": "ffdhe2048" 00:16:47.560 } 00:16:47.560 } 00:16:47.560 ]' 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.560 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.819 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:16:47.819 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:16:48.389 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.389 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.649 11:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.649 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.909 00:16:48.909 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.909 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.909 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.170 { 00:16:49.170 "cntlid": 15, 00:16:49.170 "qid": 0, 00:16:49.170 "state": "enabled", 00:16:49.170 "thread": "nvmf_tgt_poll_group_000", 00:16:49.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.170 "listen_address": { 00:16:49.170 "trtype": "TCP", 00:16:49.170 "adrfam": "IPv4", 00:16:49.170 "traddr": "10.0.0.2", 00:16:49.170 "trsvcid": "4420" 00:16:49.170 }, 00:16:49.170 "peer_address": { 00:16:49.170 "trtype": "TCP", 00:16:49.170 "adrfam": "IPv4", 00:16:49.170 "traddr": "10.0.0.1", 00:16:49.170 
"trsvcid": "56936" 00:16:49.170 }, 00:16:49.170 "auth": { 00:16:49.170 "state": "completed", 00:16:49.170 "digest": "sha256", 00:16:49.170 "dhgroup": "ffdhe2048" 00:16:49.170 } 00:16:49.170 } 00:16:49.170 ]' 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.170 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.431 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:16:49.431 11:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.002 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:50.262 11:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.262 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.522 00:16:50.522 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.523 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.523 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.783 { 00:16:50.783 "cntlid": 17, 00:16:50.783 "qid": 0, 00:16:50.783 "state": "enabled", 00:16:50.783 "thread": "nvmf_tgt_poll_group_000", 00:16:50.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:50.783 "listen_address": { 00:16:50.783 "trtype": "TCP", 00:16:50.783 "adrfam": "IPv4", 
00:16:50.783 "traddr": "10.0.0.2", 00:16:50.783 "trsvcid": "4420" 00:16:50.783 }, 00:16:50.783 "peer_address": { 00:16:50.783 "trtype": "TCP", 00:16:50.783 "adrfam": "IPv4", 00:16:50.783 "traddr": "10.0.0.1", 00:16:50.783 "trsvcid": "56950" 00:16:50.783 }, 00:16:50.783 "auth": { 00:16:50.783 "state": "completed", 00:16:50.783 "digest": "sha256", 00:16:50.783 "dhgroup": "ffdhe3072" 00:16:50.783 } 00:16:50.783 } 00:16:50.783 ]' 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.783 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.043 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:51.043 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.613 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.873 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.134 00:16:52.134 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.134 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.134 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.436 { 
00:16:52.436 "cntlid": 19, 00:16:52.436 "qid": 0, 00:16:52.436 "state": "enabled", 00:16:52.436 "thread": "nvmf_tgt_poll_group_000", 00:16:52.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.436 "listen_address": { 00:16:52.436 "trtype": "TCP", 00:16:52.436 "adrfam": "IPv4", 00:16:52.436 "traddr": "10.0.0.2", 00:16:52.436 "trsvcid": "4420" 00:16:52.436 }, 00:16:52.436 "peer_address": { 00:16:52.436 "trtype": "TCP", 00:16:52.436 "adrfam": "IPv4", 00:16:52.436 "traddr": "10.0.0.1", 00:16:52.436 "trsvcid": "52624" 00:16:52.436 }, 00:16:52.436 "auth": { 00:16:52.436 "state": "completed", 00:16:52.436 "digest": "sha256", 00:16:52.436 "dhgroup": "ffdhe3072" 00:16:52.436 } 00:16:52.436 } 00:16:52.436 ]' 00:16:52.436 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.436 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.750 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:52.750 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:53.096 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.389 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.389 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.649 00:16:53.649 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.649 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.649 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.910 11:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.910 { 00:16:53.910 "cntlid": 21, 00:16:53.910 "qid": 0, 00:16:53.910 "state": "enabled", 00:16:53.910 "thread": "nvmf_tgt_poll_group_000", 00:16:53.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:53.910 "listen_address": { 00:16:53.910 "trtype": "TCP", 00:16:53.910 "adrfam": "IPv4", 00:16:53.910 "traddr": "10.0.0.2", 00:16:53.910 "trsvcid": "4420" 00:16:53.910 }, 00:16:53.910 "peer_address": { 00:16:53.910 "trtype": "TCP", 00:16:53.910 "adrfam": "IPv4", 00:16:53.910 "traddr": "10.0.0.1", 00:16:53.910 "trsvcid": "52644" 00:16:53.910 }, 00:16:53.910 "auth": { 00:16:53.910 "state": "completed", 00:16:53.910 "digest": "sha256", 00:16:53.910 "dhgroup": "ffdhe3072" 00:16:53.910 } 00:16:53.910 } 00:16:53.910 ]' 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.910 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.170 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:16:54.170 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:16:54.741 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.001 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:55.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.002 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.262 00:16:55.262 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.262 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.262 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.522 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.522 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.522 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.523 11:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.523 { 00:16:55.523 "cntlid": 23, 00:16:55.523 "qid": 0, 00:16:55.523 "state": "enabled", 00:16:55.523 "thread": "nvmf_tgt_poll_group_000", 00:16:55.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:55.523 "listen_address": { 00:16:55.523 "trtype": "TCP", 00:16:55.523 "adrfam": "IPv4", 00:16:55.523 "traddr": "10.0.0.2", 00:16:55.523 "trsvcid": "4420" 00:16:55.523 }, 00:16:55.523 "peer_address": { 00:16:55.523 "trtype": "TCP", 00:16:55.523 "adrfam": "IPv4", 00:16:55.523 "traddr": "10.0.0.1", 00:16:55.523 "trsvcid": "52658" 00:16:55.523 }, 00:16:55.523 "auth": { 00:16:55.523 "state": "completed", 00:16:55.523 "digest": "sha256", 00:16:55.523 "dhgroup": "ffdhe3072" 00:16:55.523 } 00:16:55.523 } 00:16:55.523 ]' 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.523 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.783 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.783 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.783 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.783 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:16:55.783 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:16:56.355 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.615 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.615 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.615 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.615 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:56.615 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.616 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.875 00:16:56.875 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.875 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.875 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.136 { 00:16:57.136 "cntlid": 25, 00:16:57.136 "qid": 0, 00:16:57.136 "state": "enabled", 00:16:57.136 "thread": "nvmf_tgt_poll_group_000", 00:16:57.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.136 "listen_address": { 00:16:57.136 "trtype": "TCP", 00:16:57.136 "adrfam": "IPv4", 00:16:57.136 "traddr": "10.0.0.2", 00:16:57.136 "trsvcid": "4420" 00:16:57.136 }, 00:16:57.136 "peer_address": { 00:16:57.136 "trtype": "TCP", 00:16:57.136 "adrfam": "IPv4", 00:16:57.136 "traddr": "10.0.0.1", 00:16:57.136 "trsvcid": "52674" 00:16:57.136 }, 00:16:57.136 "auth": { 00:16:57.136 "state": "completed", 00:16:57.136 "digest": "sha256", 00:16:57.136 "dhgroup": "ffdhe4096" 00:16:57.136 } 00:16:57.136 } 00:16:57.136 ]' 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.136 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.396 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.396 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.396 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.396 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:57.396 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:16:57.968 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.229 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.489 00:16:58.489 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.489 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.489 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.750 { 00:16:58.750 "cntlid": 27, 00:16:58.750 "qid": 0, 00:16:58.750 "state": "enabled", 00:16:58.750 "thread": "nvmf_tgt_poll_group_000", 00:16:58.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.750 "listen_address": { 00:16:58.750 "trtype": "TCP", 00:16:58.750 "adrfam": "IPv4", 00:16:58.750 "traddr": "10.0.0.2", 00:16:58.750 "trsvcid": "4420" 00:16:58.750 }, 00:16:58.750 "peer_address": { 00:16:58.750 "trtype": "TCP", 00:16:58.750 "adrfam": "IPv4", 00:16:58.750 "traddr": "10.0.0.1", 00:16:58.750 "trsvcid": "52704" 00:16:58.750 }, 00:16:58.750 "auth": { 00:16:58.750 "state": "completed", 00:16:58.750 "digest": "sha256", 00:16:58.750 "dhgroup": "ffdhe4096" 00:16:58.750 } 00:16:58.750 } 00:16:58.750 ]' 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.750 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:59.011 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:59.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.583 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.844 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.106 00:17:00.106 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
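Every pass in this trace follows the same connect_authenticate cycle from target/auth.sh: pin the host app's DH-CHAP policy with bdev_nvme_set_options, authorize the host NQN on the target with nvmf_subsystem_add_host, run the DH-HMAC-CHAP handshake via bdev_nvme_attach_controller, then read the qpair back with nvmf_subsystem_get_qpairs and assert on its .auth fields. A minimal sketch of one such pass, assuming an SPDK checkout as the working directory, a target listening on 10.0.0.2:4420 and answering on the default RPC socket, a host app on /var/tmp/host.sock, and keys key1/ckey1 registered earlier in the run (not shown in this excerpt):

  rpc=scripts/rpc.py
  hostsock=/var/tmp/host.sock
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: allow exactly one digest and one DH group for the handshake.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Target side: authorize the host; the ctrlr key makes auth bidirectional.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: DH-HMAC-CHAP actually runs during this attach.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Target side: verify the qpair authenticated with the expected parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'

On success the probe prints "completed sha256 ffdhe3072", matching the [[ ... ]] assertions in the surrounding trace; teardown reverses the setup with bdev_nvme_detach_controller and nvmf_subsystem_remove_host.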
00:17:00.106 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.106 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.367 { 00:17:00.367 "cntlid": 29, 00:17:00.367 "qid": 0, 00:17:00.367 "state": "enabled", 00:17:00.367 "thread": "nvmf_tgt_poll_group_000", 00:17:00.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.367 "listen_address": { 00:17:00.367 "trtype": "TCP", 00:17:00.367 "adrfam": "IPv4", 00:17:00.367 "traddr": "10.0.0.2", 00:17:00.367 "trsvcid": "4420" 00:17:00.367 }, 00:17:00.367 "peer_address": { 00:17:00.367 "trtype": "TCP", 00:17:00.367 "adrfam": "IPv4", 00:17:00.367 "traddr": "10.0.0.1", 00:17:00.367 "trsvcid": "52728" 00:17:00.367 }, 00:17:00.367 "auth": { 00:17:00.367 "state": "completed", 00:17:00.367 "digest": "sha256", 00:17:00.367 "dhgroup": "ffdhe4096" 00:17:00.367 } 00:17:00.367 } 00:17:00.367 ]' 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.367 11:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.367 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.367 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.367 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.367 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.367 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.628 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:00.628 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: 
--dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:01.571 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.571 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.571 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.572 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.572 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.572 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.572 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.572 11:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.572 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.833 00:17:01.833 11:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.833 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.833 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.093 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.093 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.093 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.093 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.093 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.093 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.093 { 00:17:02.093 "cntlid": 31, 00:17:02.093 "qid": 0, 00:17:02.093 "state": "enabled", 00:17:02.093 "thread": "nvmf_tgt_poll_group_000", 00:17:02.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:02.094 "listen_address": { 00:17:02.094 "trtype": "TCP", 00:17:02.094 "adrfam": "IPv4", 00:17:02.094 "traddr": "10.0.0.2", 00:17:02.094 "trsvcid": "4420" 00:17:02.094 }, 00:17:02.094 "peer_address": { 00:17:02.094 "trtype": "TCP", 00:17:02.094 "adrfam": "IPv4", 00:17:02.094 "traddr": "10.0.0.1", 00:17:02.094 "trsvcid": "52746" 00:17:02.094 }, 00:17:02.094 "auth": { 00:17:02.094 "state": "completed", 00:17:02.094 "digest": "sha256", 00:17:02.094 "dhgroup": "ffdhe4096" 00:17:02.094 } 00:17:02.094 } 00:17:02.094 ]' 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.094 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.354 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:02.354 11:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.925 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.186 11:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.446 00:17:03.446 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.446 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.446 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.707 { 00:17:03.707 "cntlid": 33, 00:17:03.707 "qid": 0, 00:17:03.707 "state": "enabled", 00:17:03.707 "thread": "nvmf_tgt_poll_group_000", 00:17:03.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.707 "listen_address": { 00:17:03.707 "trtype": "TCP", 00:17:03.707 "adrfam": "IPv4", 00:17:03.707 "traddr": "10.0.0.2", 00:17:03.707 "trsvcid": "4420" 00:17:03.707 }, 00:17:03.707 "peer_address": { 00:17:03.707 "trtype": "TCP", 00:17:03.707 "adrfam": "IPv4", 00:17:03.707 "traddr": "10.0.0.1", 00:17:03.707 "trsvcid": "35152" 00:17:03.707 }, 00:17:03.707 "auth": { 00:17:03.707 "state": "completed", 00:17:03.707 "digest": "sha256", 00:17:03.707 "dhgroup": "ffdhe6144" 00:17:03.707 } 00:17:03.707 } 00:17:03.707 ]' 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.707 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.967 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.967 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.967 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.967 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:03.967 11:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.908 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.170 00:17:05.170 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.170 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.170 11:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.431 { 00:17:05.431 "cntlid": 35, 00:17:05.431 "qid": 0, 00:17:05.431 "state": "enabled", 00:17:05.431 "thread": "nvmf_tgt_poll_group_000", 00:17:05.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.431 "listen_address": { 00:17:05.431 "trtype": "TCP", 00:17:05.431 "adrfam": "IPv4", 00:17:05.431 "traddr": "10.0.0.2", 00:17:05.431 "trsvcid": "4420" 00:17:05.431 }, 00:17:05.431 "peer_address": { 00:17:05.431 "trtype": "TCP", 00:17:05.431 "adrfam": "IPv4", 00:17:05.431 "traddr": "10.0.0.1", 00:17:05.431 "trsvcid": "35182" 00:17:05.431 }, 00:17:05.431 "auth": { 00:17:05.431 "state": "completed", 00:17:05.431 "digest": "sha256", 00:17:05.431 "dhgroup": "ffdhe6144" 00:17:05.431 } 00:17:05.431 } 00:17:05.431 ]' 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.431 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.691 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.691 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.691 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.691 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:05.691 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:06.261 11:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.523 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.092 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.092 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.092 { 00:17:07.092 "cntlid": 37, 00:17:07.092 "qid": 0, 00:17:07.092 "state": "enabled", 00:17:07.092 "thread": "nvmf_tgt_poll_group_000", 00:17:07.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.092 "listen_address": { 00:17:07.092 "trtype": "TCP", 00:17:07.092 "adrfam": "IPv4", 00:17:07.092 "traddr": "10.0.0.2", 00:17:07.092 "trsvcid": "4420" 00:17:07.093 }, 00:17:07.093 "peer_address": { 00:17:07.093 "trtype": "TCP", 00:17:07.093 "adrfam": "IPv4", 00:17:07.093 "traddr": "10.0.0.1", 00:17:07.093 "trsvcid": "35208" 00:17:07.093 }, 00:17:07.093 "auth": { 00:17:07.093 "state": "completed", 00:17:07.093 "digest": "sha256", 00:17:07.093 "dhgroup": "ffdhe6144" 00:17:07.093 } 00:17:07.093 } 00:17:07.093 ]' 00:17:07.093 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.093 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.093 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.354 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.354 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.354 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.354 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:07.354 11:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.354 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:07.354 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.297 11:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.297 11:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.559 00:17:08.559 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.559 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.559 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.820 { 00:17:08.820 "cntlid": 39, 00:17:08.820 "qid": 0, 00:17:08.820 "state": "enabled", 00:17:08.820 "thread": "nvmf_tgt_poll_group_000", 00:17:08.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.820 "listen_address": { 00:17:08.820 "trtype": "TCP", 00:17:08.820 "adrfam": "IPv4", 00:17:08.820 "traddr": "10.0.0.2", 00:17:08.820 "trsvcid": "4420" 00:17:08.820 }, 00:17:08.820 "peer_address": { 00:17:08.820 "trtype": "TCP", 00:17:08.820 "adrfam": "IPv4", 00:17:08.820 "traddr": "10.0.0.1", 00:17:08.820 "trsvcid": "35248" 00:17:08.820 }, 00:17:08.820 "auth": { 00:17:08.820 "state": "completed", 00:17:08.820 "digest": "sha256", 00:17:08.820 "dhgroup": "ffdhe6144" 00:17:08.820 } 00:17:08.820 } 00:17:08.820 ]' 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.820 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.081 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:09.081 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.081 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.081 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:09.081 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:09.652 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
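
Every iteration in this auth test drives the same DH-HMAC-CHAP provisioning sequence: the target-side RPC (rpc_cmd in the trace) authorizes the host NQN on the subsystem with a DH-HMAC-CHAP key, plus an optional controller key for bidirectional authentication; the host-side RPC socket (hostrpc in the trace, i.e. scripts/rpc.py -s /var/tmp/host.sock) pins the one digest/dhgroup pair under test; then the controller is attached with matching secrets. A minimal sketch of one pass, with $HOSTNQN standing in for the uuid-based host NQN used throughout this run, and key0/ckey0 naming keys loaded into the keyring earlier in the test:

    # Target side: authorize the host on the subsystem (bidirectional auth).
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: restrict negotiation to one digest/dhgroup pair, then attach.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # A named controller proves the handshake succeeded.
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]

When the controller key is omitted, as in the key3 iterations above, authentication is unidirectional: only the host proves possession of a secret, which is why the matching nvme_connect calls carry a single DHHC-1:03 secret and no --dhchap-ctrl-secret.
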
00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.912 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.485 00:17:10.485 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.485 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.485 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.746 { 00:17:10.746 "cntlid": 41, 00:17:10.746 "qid": 0, 00:17:10.746 "state": "enabled", 00:17:10.746 "thread": "nvmf_tgt_poll_group_000", 00:17:10.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.746 "listen_address": { 00:17:10.746 "trtype": "TCP", 00:17:10.746 "adrfam": "IPv4", 00:17:10.746 "traddr": "10.0.0.2", 00:17:10.746 "trsvcid": "4420" 00:17:10.746 }, 00:17:10.746 "peer_address": { 00:17:10.746 "trtype": "TCP", 00:17:10.746 "adrfam": "IPv4", 00:17:10.746 "traddr": "10.0.0.1", 00:17:10.746 "trsvcid": "35264" 00:17:10.746 }, 00:17:10.746 "auth": { 00:17:10.746 "state": "completed", 00:17:10.746 "digest": "sha256", 00:17:10.746 "dhgroup": "ffdhe8192" 00:17:10.746 } 00:17:10.746 } 00:17:10.746 ]' 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.746 11:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.746 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.007 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:11.007 11:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.949 11:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.521 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.521 { 00:17:12.521 "cntlid": 43, 00:17:12.521 "qid": 0, 00:17:12.521 "state": "enabled", 00:17:12.521 "thread": "nvmf_tgt_poll_group_000", 00:17:12.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.521 "listen_address": { 00:17:12.521 "trtype": "TCP", 00:17:12.521 "adrfam": "IPv4", 00:17:12.521 "traddr": "10.0.0.2", 00:17:12.521 "trsvcid": "4420" 00:17:12.521 }, 00:17:12.521 "peer_address": { 00:17:12.521 "trtype": "TCP", 00:17:12.521 "adrfam": "IPv4", 00:17:12.521 "traddr": "10.0.0.1", 00:17:12.521 "trsvcid": "34560" 00:17:12.521 }, 00:17:12.521 "auth": { 00:17:12.521 "state": "completed", 00:17:12.521 "digest": "sha256", 00:17:12.521 "dhgroup": "ffdhe8192" 00:17:12.521 } 00:17:12.521 } 00:17:12.521 ]' 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:12.521 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.782 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.782 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.782 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.782 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.783 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.043 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:13.043 11:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.614 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.875 11:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.875 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.447 00:17:14.447 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.447 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.447 11:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.447 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.447 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.447 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.447 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.447 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.447 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.447 { 00:17:14.447 "cntlid": 45, 00:17:14.447 "qid": 0, 00:17:14.447 "state": "enabled", 00:17:14.447 "thread": "nvmf_tgt_poll_group_000", 00:17:14.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.447 "listen_address": { 00:17:14.447 "trtype": "TCP", 00:17:14.447 "adrfam": "IPv4", 00:17:14.447 "traddr": "10.0.0.2", 00:17:14.447 "trsvcid": "4420" 00:17:14.447 }, 00:17:14.447 "peer_address": { 00:17:14.447 "trtype": "TCP", 00:17:14.447 "adrfam": "IPv4", 00:17:14.447 "traddr": "10.0.0.1", 00:17:14.447 "trsvcid": "34586" 00:17:14.447 }, 00:17:14.447 "auth": { 00:17:14.447 "state": "completed", 00:17:14.447 "digest": "sha256", 00:17:14.447 "dhgroup": "ffdhe8192" 00:17:14.447 } 00:17:14.447 } 00:17:14.447 ]' 00:17:14.447 
11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.708 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.970 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:14.970 11:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:15.542 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.542 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.542 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.543 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.543 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.543 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.543 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.543 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.804 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:15.804 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.804 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.805 11:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.805 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.066 00:17:16.326 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.327 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.327 11:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.327 { 00:17:16.327 "cntlid": 47, 00:17:16.327 "qid": 0, 00:17:16.327 "state": "enabled", 00:17:16.327 "thread": "nvmf_tgt_poll_group_000", 00:17:16.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.327 "listen_address": { 00:17:16.327 "trtype": "TCP", 00:17:16.327 "adrfam": "IPv4", 00:17:16.327 "traddr": "10.0.0.2", 00:17:16.327 "trsvcid": "4420" 00:17:16.327 }, 00:17:16.327 "peer_address": { 00:17:16.327 "trtype": "TCP", 00:17:16.327 "adrfam": "IPv4", 00:17:16.327 "traddr": "10.0.0.1", 00:17:16.327 "trsvcid": "34610" 00:17:16.327 }, 00:17:16.327 "auth": { 00:17:16.327 "state": "completed", 00:17:16.327 
"digest": "sha256", 00:17:16.327 "dhgroup": "ffdhe8192" 00:17:16.327 } 00:17:16.327 } 00:17:16.327 ]' 00:17:16.327 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.589 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.852 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:16.852 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:17.425 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.425 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.425 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.425 11:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.425 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.425 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:17.425 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.425 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.425 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:17.425 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:17.686 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:17.686 11:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.686 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.686 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:17.686 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.686 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.687 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.687 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.949 { 00:17:17.949 "cntlid": 49, 00:17:17.949 "qid": 0, 00:17:17.949 "state": "enabled", 00:17:17.949 "thread": "nvmf_tgt_poll_group_000", 00:17:17.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.949 "listen_address": { 00:17:17.949 "trtype": "TCP", 00:17:17.949 "adrfam": "IPv4", 
00:17:17.949 "traddr": "10.0.0.2", 00:17:17.949 "trsvcid": "4420" 00:17:17.949 }, 00:17:17.949 "peer_address": { 00:17:17.949 "trtype": "TCP", 00:17:17.949 "adrfam": "IPv4", 00:17:17.949 "traddr": "10.0.0.1", 00:17:17.949 "trsvcid": "34634" 00:17:17.949 }, 00:17:17.949 "auth": { 00:17:17.949 "state": "completed", 00:17:17.949 "digest": "sha384", 00:17:17.949 "dhgroup": "null" 00:17:17.949 } 00:17:17.949 } 00:17:17.949 ]' 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.949 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.210 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.210 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.210 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.210 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.210 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.471 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:18.471 11:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.042 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.303 11:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.303 00:17:19.303 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.303 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.303 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.564 { 00:17:19.564 "cntlid": 51, 00:17:19.564 "qid": 0, 00:17:19.564 "state": "enabled", 
00:17:19.564 "thread": "nvmf_tgt_poll_group_000", 00:17:19.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.564 "listen_address": { 00:17:19.564 "trtype": "TCP", 00:17:19.564 "adrfam": "IPv4", 00:17:19.564 "traddr": "10.0.0.2", 00:17:19.564 "trsvcid": "4420" 00:17:19.564 }, 00:17:19.564 "peer_address": { 00:17:19.564 "trtype": "TCP", 00:17:19.564 "adrfam": "IPv4", 00:17:19.564 "traddr": "10.0.0.1", 00:17:19.564 "trsvcid": "34656" 00:17:19.564 }, 00:17:19.564 "auth": { 00:17:19.564 "state": "completed", 00:17:19.564 "digest": "sha384", 00:17:19.564 "dhgroup": "null" 00:17:19.564 } 00:17:19.564 } 00:17:19.564 ]' 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.564 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:19.825 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.826 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.826 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.826 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.826 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:19.826 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.767 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.029 00:17:21.029 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.029 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.029 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.290 11:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.290 { 00:17:21.290 "cntlid": 53, 00:17:21.290 "qid": 0, 00:17:21.290 "state": "enabled", 00:17:21.290 "thread": "nvmf_tgt_poll_group_000", 00:17:21.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.290 "listen_address": { 00:17:21.290 "trtype": "TCP", 00:17:21.290 "adrfam": "IPv4", 00:17:21.290 "traddr": "10.0.0.2", 00:17:21.290 "trsvcid": "4420" 00:17:21.290 }, 00:17:21.290 "peer_address": { 00:17:21.290 "trtype": "TCP", 00:17:21.290 "adrfam": "IPv4", 00:17:21.290 "traddr": "10.0.0.1", 00:17:21.290 "trsvcid": "34682" 00:17:21.290 }, 00:17:21.290 "auth": { 00:17:21.290 "state": "completed", 00:17:21.290 "digest": "sha384", 00:17:21.290 "dhgroup": "null" 00:17:21.290 } 00:17:21.290 } 00:17:21.290 ]' 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.290 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.551 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:21.551 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.122 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:22.425 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.426 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.426 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.426 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.426 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.426 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.735 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.735 { 00:17:22.735 "cntlid": 55, 00:17:22.735 "qid": 0, 00:17:22.735 "state": "enabled", 00:17:22.735 "thread": "nvmf_tgt_poll_group_000", 00:17:22.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.735 "listen_address": { 00:17:22.735 "trtype": "TCP", 00:17:22.735 "adrfam": "IPv4", 00:17:22.735 "traddr": "10.0.0.2", 00:17:22.735 "trsvcid": "4420" 00:17:22.735 }, 00:17:22.735 "peer_address": { 00:17:22.735 "trtype": "TCP", 00:17:22.735 "adrfam": "IPv4", 00:17:22.735 "traddr": "10.0.0.1", 00:17:22.735 "trsvcid": "32824" 00:17:22.735 }, 00:17:22.735 "auth": { 00:17:22.735 "state": "completed", 00:17:22.735 "digest": "sha384", 00:17:22.735 "dhgroup": "null" 00:17:22.735 } 00:17:22.735 } 00:17:22.735 ]' 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.735 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.997 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.997 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.997 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.997 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.997 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.997 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.257 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:23.257 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.828 11:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.828 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.088 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.089 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.089 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.089 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.089 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.089 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.089 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.089 00:17:24.350 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.350 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.350 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.350 { 00:17:24.350 "cntlid": 57, 00:17:24.350 "qid": 0, 00:17:24.350 "state": "enabled", 00:17:24.350 "thread": "nvmf_tgt_poll_group_000", 00:17:24.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.350 "listen_address": { 00:17:24.350 "trtype": "TCP", 00:17:24.350 "adrfam": "IPv4", 00:17:24.350 "traddr": "10.0.0.2", 00:17:24.350 "trsvcid": "4420" 00:17:24.350 }, 00:17:24.350 "peer_address": { 00:17:24.350 "trtype": "TCP", 00:17:24.350 "adrfam": "IPv4", 00:17:24.350 "traddr": "10.0.0.1", 00:17:24.350 "trsvcid": "32842" 00:17:24.350 }, 00:17:24.350 "auth": { 00:17:24.350 "state": "completed", 00:17:24.350 "digest": "sha384", 00:17:24.350 "dhgroup": "ffdhe2048" 00:17:24.350 } 00:17:24.350 } 00:17:24.350 ]' 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.350 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:24.612 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:25.583 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.583 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.844 00:17:25.844 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.844 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.844 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.106 { 00:17:26.106 "cntlid": 59, 00:17:26.106 "qid": 0, 00:17:26.106 "state": "enabled", 00:17:26.106 "thread": "nvmf_tgt_poll_group_000", 00:17:26.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.106 "listen_address": { 00:17:26.106 "trtype": "TCP", 00:17:26.106 "adrfam": "IPv4", 00:17:26.106 "traddr": "10.0.0.2", 00:17:26.106 "trsvcid": "4420" 00:17:26.106 }, 00:17:26.106 "peer_address": { 00:17:26.106 "trtype": "TCP", 00:17:26.106 "adrfam": "IPv4", 00:17:26.106 "traddr": "10.0.0.1", 00:17:26.106 "trsvcid": "32862" 00:17:26.106 }, 00:17:26.106 "auth": { 00:17:26.106 "state": "completed", 00:17:26.106 "digest": "sha384", 00:17:26.106 "dhgroup": "ffdhe2048" 00:17:26.106 } 00:17:26.106 } 00:17:26.106 ]' 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.106 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.367 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:26.367 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.938 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.199 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.460 00:17:27.460 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.460 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.460 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.722 { 00:17:27.722 "cntlid": 61, 00:17:27.722 "qid": 0, 00:17:27.722 "state": "enabled", 00:17:27.722 "thread": "nvmf_tgt_poll_group_000", 00:17:27.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.722 "listen_address": { 00:17:27.722 "trtype": "TCP", 00:17:27.722 "adrfam": "IPv4", 00:17:27.722 "traddr": "10.0.0.2", 00:17:27.722 "trsvcid": "4420" 00:17:27.722 }, 00:17:27.722 "peer_address": { 00:17:27.722 "trtype": "TCP", 00:17:27.722 "adrfam": "IPv4", 00:17:27.722 "traddr": "10.0.0.1", 00:17:27.722 "trsvcid": "32896" 00:17:27.722 }, 00:17:27.722 "auth": { 00:17:27.722 "state": "completed", 00:17:27.722 "digest": "sha384", 00:17:27.722 "dhgroup": "ffdhe2048" 00:17:27.722 } 00:17:27.722 } 00:17:27.722 ]' 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.722 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.983 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:27.983 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:28.555 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:28.815 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.077 00:17:29.077 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.077 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.077 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.338 { 00:17:29.338 "cntlid": 63, 00:17:29.338 "qid": 0, 00:17:29.338 "state": "enabled", 00:17:29.338 "thread": "nvmf_tgt_poll_group_000", 00:17:29.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.338 "listen_address": { 00:17:29.338 "trtype": "TCP", 00:17:29.338 "adrfam": "IPv4", 00:17:29.338 "traddr": "10.0.0.2", 00:17:29.338 "trsvcid": "4420" 00:17:29.338 }, 00:17:29.338 "peer_address": { 00:17:29.338 "trtype": "TCP", 00:17:29.338 "adrfam": "IPv4", 00:17:29.338 "traddr": "10.0.0.1", 00:17:29.338 "trsvcid": "32926" 00:17:29.338 }, 00:17:29.338 "auth": { 00:17:29.338 "state": "completed", 00:17:29.338 "digest": "sha384", 00:17:29.338 "dhgroup": "ffdhe2048" 00:17:29.338 } 00:17:29.338 } 00:17:29.338 ]' 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.338 11:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.338 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.338 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.338 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.338 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.338 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.338 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.599 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:29.599 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:30.172 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:30.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.432 11:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.432 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.692 
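The three jq probes that follow each attach are the actual pass/fail checks: they assert that the established qpair negotiated the expected digest and dhgroup and that authentication reached the completed state. A minimal equivalent of what target/auth.sh lines 73-77 do, assuming a here-string is an acceptable stand-in for how the script feeds the captured JSON to jq:

    # Capture the qpair list from the target, then assert on its auth fields.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # null / ffdhe2048 in the earlier passes
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]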
00:17:30.692 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.692 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.692 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.970 { 00:17:30.970 "cntlid": 65, 00:17:30.970 "qid": 0, 00:17:30.970 "state": "enabled", 00:17:30.970 "thread": "nvmf_tgt_poll_group_000", 00:17:30.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.970 "listen_address": { 00:17:30.970 "trtype": "TCP", 00:17:30.970 "adrfam": "IPv4", 00:17:30.970 "traddr": "10.0.0.2", 00:17:30.970 "trsvcid": "4420" 00:17:30.970 }, 00:17:30.970 "peer_address": { 00:17:30.970 "trtype": "TCP", 00:17:30.970 "adrfam": "IPv4", 00:17:30.970 "traddr": "10.0.0.1", 00:17:30.970 "trsvcid": "32948" 00:17:30.970 }, 00:17:30.970 "auth": { 00:17:30.970 "state": "completed", 00:17:30.970 "digest": "sha384", 00:17:30.970 "dhgroup": "ffdhe3072" 00:17:30.970 } 00:17:30.970 } 00:17:30.970 ]' 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.970 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.232 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:31.232 11:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:31.805 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:32.065 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:32.065 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.066 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.326 00:17:32.326 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.326 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.326 11:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.586 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.586 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.586 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.587 { 00:17:32.587 "cntlid": 67, 00:17:32.587 "qid": 0, 00:17:32.587 "state": "enabled", 00:17:32.587 "thread": "nvmf_tgt_poll_group_000", 00:17:32.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.587 "listen_address": { 00:17:32.587 "trtype": "TCP", 00:17:32.587 "adrfam": "IPv4", 00:17:32.587 "traddr": "10.0.0.2", 00:17:32.587 "trsvcid": "4420" 00:17:32.587 }, 00:17:32.587 "peer_address": { 00:17:32.587 "trtype": "TCP", 00:17:32.587 "adrfam": "IPv4", 00:17:32.587 "traddr": "10.0.0.1", 00:17:32.587 "trsvcid": "44406" 00:17:32.587 }, 00:17:32.587 "auth": { 00:17:32.587 "state": "completed", 00:17:32.587 "digest": "sha384", 00:17:32.587 "dhgroup": "ffdhe3072" 00:17:32.587 } 00:17:32.587 } 00:17:32.587 ]' 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.587 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.848 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret 
DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:32.848 11:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:33.421 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.682 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.942 00:17:33.942 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.942 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.942 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.203 { 00:17:34.203 "cntlid": 69, 00:17:34.203 "qid": 0, 00:17:34.203 "state": "enabled", 00:17:34.203 "thread": "nvmf_tgt_poll_group_000", 00:17:34.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.203 "listen_address": { 00:17:34.203 "trtype": "TCP", 00:17:34.203 "adrfam": "IPv4", 00:17:34.203 "traddr": "10.0.0.2", 00:17:34.203 "trsvcid": "4420" 00:17:34.203 }, 00:17:34.203 "peer_address": { 00:17:34.203 "trtype": "TCP", 00:17:34.203 "adrfam": "IPv4", 00:17:34.203 "traddr": "10.0.0.1", 00:17:34.203 "trsvcid": "44430" 00:17:34.203 }, 00:17:34.203 "auth": { 00:17:34.203 "state": "completed", 00:17:34.203 "digest": "sha384", 00:17:34.203 "dhgroup": "ffdhe3072" 00:17:34.203 } 00:17:34.203 } 00:17:34.203 ]' 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.203 11:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:34.463 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:34.463 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:35.034 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
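The trace above and the lines that follow are one pass of the test's per-key loop: for each DH-HMAC-CHAP digest/dhgroup pair, the target is told which key this host may use (nvmf_subsystem_add_host), the SPDK host stack is constrained to the same parameters (bdev_nvme_set_options), and a controller is attached and its qpair inspected. A minimal sketch of that sequence, assuming the sockets, addresses, and NQNs from this run, and that key3/ckey3 were loaded earlier by the test (for key3 no controller key is configured, so only the host authenticates):

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: only negotiate sha384 + ffdhe3072.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target side: allow this host NQN with key3 (no --dhchap-ctrlr-key,
    # so the controller is not authenticated back to the host).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

    # Attach a controller over TCP, then read back the negotiated auth state.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

The jq assertions in the log check exactly those three fields of the qpair's auth object: digest, dhgroup, and state == completed, before the controller is detached again.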
00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.295 11:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.556 00:17:35.556 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.556 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.556 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.817 { 00:17:35.817 "cntlid": 71, 00:17:35.817 "qid": 0, 00:17:35.817 "state": "enabled", 00:17:35.817 "thread": "nvmf_tgt_poll_group_000", 00:17:35.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.817 "listen_address": { 00:17:35.817 "trtype": "TCP", 00:17:35.817 "adrfam": "IPv4", 00:17:35.817 "traddr": "10.0.0.2", 00:17:35.817 "trsvcid": "4420" 00:17:35.817 }, 00:17:35.817 "peer_address": { 00:17:35.817 "trtype": "TCP", 00:17:35.817 "adrfam": "IPv4", 00:17:35.817 "traddr": "10.0.0.1", 00:17:35.817 "trsvcid": "44454" 00:17:35.817 }, 00:17:35.817 "auth": { 00:17:35.817 "state": "completed", 00:17:35.817 "digest": "sha384", 00:17:35.817 "dhgroup": "ffdhe3072" 00:17:35.817 } 00:17:35.817 } 00:17:35.817 ]' 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.817 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.078 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:36.078 11:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.657 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
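Between these passes the same subsystem is also exercised with the kernel initiator: nvme-cli connects with the DHHC-1 secret passed on the command line, the disconnect is confirmed, and the host entry is removed before the next digest/dhgroup combination is configured. A sketch of that leg, with the base64 key material elided (the full DHHC-1:03:... secret appears verbatim in the log above); as in the log, only --dhchap-secret is given for key3, since no controller secret is configured:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Kernel-initiator leg: authenticate with the host secret only,
    # then tear the association down again.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:03:MmJj...N90Hoe8=:'
    nvme disconnect -n "$subnqn"

    # Drop the host from the subsystem before the next combination.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host "$subnqn" "$hostnqn"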
00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.923 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.184 00:17:37.184 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.184 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.184 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.445 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.445 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.445 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.445 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.445 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.445 11:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.445 { 00:17:37.445 "cntlid": 73, 00:17:37.445 "qid": 0, 00:17:37.445 "state": "enabled", 00:17:37.445 "thread": "nvmf_tgt_poll_group_000", 00:17:37.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.445 "listen_address": { 00:17:37.445 "trtype": "TCP", 00:17:37.445 "adrfam": "IPv4", 00:17:37.445 "traddr": "10.0.0.2", 00:17:37.445 "trsvcid": "4420" 00:17:37.445 }, 00:17:37.445 "peer_address": { 00:17:37.445 "trtype": "TCP", 00:17:37.445 "adrfam": "IPv4", 00:17:37.445 "traddr": "10.0.0.1", 00:17:37.445 "trsvcid": "44470" 00:17:37.445 }, 00:17:37.445 "auth": { 00:17:37.445 "state": "completed", 00:17:37.445 "digest": "sha384", 00:17:37.445 "dhgroup": "ffdhe4096" 00:17:37.445 } 00:17:37.445 } 00:17:37.445 ]' 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.445 
11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.445 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.706 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:37.706 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.278 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.541 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.804 00:17:38.804 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.804 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.804 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.065 { 00:17:39.065 "cntlid": 75, 00:17:39.065 "qid": 0, 00:17:39.065 "state": "enabled", 00:17:39.065 "thread": "nvmf_tgt_poll_group_000", 00:17:39.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.065 "listen_address": { 00:17:39.065 "trtype": "TCP", 00:17:39.065 "adrfam": "IPv4", 00:17:39.065 "traddr": "10.0.0.2", 00:17:39.065 "trsvcid": "4420" 00:17:39.065 }, 00:17:39.065 "peer_address": { 00:17:39.065 "trtype": "TCP", 00:17:39.065 "adrfam": "IPv4", 00:17:39.065 "traddr": "10.0.0.1", 00:17:39.065 "trsvcid": "44502" 00:17:39.065 }, 00:17:39.065 "auth": { 00:17:39.065 "state": "completed", 00:17:39.065 "digest": "sha384", 00:17:39.065 "dhgroup": "ffdhe4096" 00:17:39.065 } 00:17:39.065 } 00:17:39.065 ]' 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.065 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.326 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:39.326 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:39.897 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.158 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.419 00:17:40.419 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.419 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.419 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.679 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.679 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.679 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.679 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.679 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.679 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.679 { 00:17:40.679 "cntlid": 77, 00:17:40.679 "qid": 0, 00:17:40.679 "state": "enabled", 00:17:40.679 "thread": "nvmf_tgt_poll_group_000", 00:17:40.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.679 "listen_address": { 00:17:40.679 "trtype": "TCP", 00:17:40.679 "adrfam": "IPv4", 00:17:40.679 "traddr": "10.0.0.2", 00:17:40.679 "trsvcid": "4420" 00:17:40.679 }, 00:17:40.679 "peer_address": { 00:17:40.679 "trtype": "TCP", 00:17:40.679 "adrfam": "IPv4", 00:17:40.679 "traddr": "10.0.0.1", 00:17:40.679 "trsvcid": "44538" 00:17:40.679 }, 00:17:40.680 "auth": { 00:17:40.680 "state": "completed", 00:17:40.680 "digest": "sha384", 00:17:40.680 "dhgroup": "ffdhe4096" 00:17:40.680 } 00:17:40.680 } 00:17:40.680 ]' 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.680 11:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.680 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:40.940 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.510 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:41.771 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.031 00:17:42.031 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.031 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.031 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.290 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.290 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.290 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.290 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.291 { 00:17:42.291 "cntlid": 79, 00:17:42.291 "qid": 0, 00:17:42.291 "state": "enabled", 00:17:42.291 "thread": "nvmf_tgt_poll_group_000", 00:17:42.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:42.291 "listen_address": { 00:17:42.291 "trtype": "TCP", 00:17:42.291 "adrfam": "IPv4", 00:17:42.291 "traddr": "10.0.0.2", 00:17:42.291 "trsvcid": "4420" 00:17:42.291 }, 00:17:42.291 "peer_address": { 00:17:42.291 "trtype": "TCP", 00:17:42.291 "adrfam": "IPv4", 00:17:42.291 "traddr": "10.0.0.1", 00:17:42.291 "trsvcid": "44558" 00:17:42.291 }, 00:17:42.291 "auth": { 00:17:42.291 "state": "completed", 00:17:42.291 "digest": "sha384", 00:17:42.291 "dhgroup": "ffdhe4096" 00:17:42.291 } 00:17:42.291 } 00:17:42.291 ]' 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.291 11:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.291 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.551 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:42.551 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.120 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:43.381 11:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.381 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.642 00:17:43.642 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.642 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.642 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.903 { 00:17:43.903 "cntlid": 81, 00:17:43.903 "qid": 0, 00:17:43.903 "state": "enabled", 00:17:43.903 "thread": "nvmf_tgt_poll_group_000", 00:17:43.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.903 "listen_address": { 00:17:43.903 "trtype": "TCP", 00:17:43.903 "adrfam": "IPv4", 00:17:43.903 "traddr": "10.0.0.2", 00:17:43.903 "trsvcid": "4420" 00:17:43.903 }, 00:17:43.903 "peer_address": { 00:17:43.903 "trtype": "TCP", 00:17:43.903 "adrfam": "IPv4", 00:17:43.903 "traddr": "10.0.0.1", 00:17:43.903 "trsvcid": "46724" 00:17:43.903 }, 00:17:43.903 "auth": { 00:17:43.903 "state": "completed", 00:17:43.903 "digest": 
"sha384", 00:17:43.903 "dhgroup": "ffdhe6144" 00:17:43.903 } 00:17:43.903 } 00:17:43.903 ]' 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.903 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:44.164 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.104 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.366 00:17:45.366 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.366 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.366 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.628 { 00:17:45.628 "cntlid": 83, 00:17:45.628 "qid": 0, 00:17:45.628 "state": "enabled", 00:17:45.628 "thread": "nvmf_tgt_poll_group_000", 00:17:45.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.628 "listen_address": { 00:17:45.628 "trtype": "TCP", 00:17:45.628 "adrfam": "IPv4", 00:17:45.628 "traddr": "10.0.0.2", 00:17:45.628 
"trsvcid": "4420" 00:17:45.628 }, 00:17:45.628 "peer_address": { 00:17:45.628 "trtype": "TCP", 00:17:45.628 "adrfam": "IPv4", 00:17:45.628 "traddr": "10.0.0.1", 00:17:45.628 "trsvcid": "46750" 00:17:45.628 }, 00:17:45.628 "auth": { 00:17:45.628 "state": "completed", 00:17:45.628 "digest": "sha384", 00:17:45.628 "dhgroup": "ffdhe6144" 00:17:45.628 } 00:17:45.628 } 00:17:45.628 ]' 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.628 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.893 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.893 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.893 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.893 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.893 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:45.893 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:46.834 
11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.834 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.127 00:17:47.127 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.127 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.127 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.386 { 00:17:47.386 "cntlid": 85, 00:17:47.386 "qid": 0, 00:17:47.386 "state": "enabled", 00:17:47.386 "thread": "nvmf_tgt_poll_group_000", 00:17:47.386 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.386 "listen_address": { 00:17:47.386 "trtype": "TCP", 00:17:47.386 "adrfam": "IPv4", 00:17:47.386 "traddr": "10.0.0.2", 00:17:47.386 "trsvcid": "4420" 00:17:47.386 }, 00:17:47.386 "peer_address": { 00:17:47.386 "trtype": "TCP", 00:17:47.386 "adrfam": "IPv4", 00:17:47.386 "traddr": "10.0.0.1", 00:17:47.386 "trsvcid": "46766" 00:17:47.386 }, 00:17:47.386 "auth": { 00:17:47.386 "state": "completed", 00:17:47.386 "digest": "sha384", 00:17:47.386 "dhgroup": "ffdhe6144" 00:17:47.386 } 00:17:47.386 } 00:17:47.386 ]' 00:17:47.386 11:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.386 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.386 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.386 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.386 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.647 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.647 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.647 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.647 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:47.647 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:48.217 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.478 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.478 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.478 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.478 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.478 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.478 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.478 11:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.478 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.738 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.997 { 00:17:48.997 "cntlid": 87, 
00:17:48.997 "qid": 0, 00:17:48.997 "state": "enabled", 00:17:48.997 "thread": "nvmf_tgt_poll_group_000", 00:17:48.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.997 "listen_address": { 00:17:48.997 "trtype": "TCP", 00:17:48.997 "adrfam": "IPv4", 00:17:48.997 "traddr": "10.0.0.2", 00:17:48.997 "trsvcid": "4420" 00:17:48.997 }, 00:17:48.997 "peer_address": { 00:17:48.997 "trtype": "TCP", 00:17:48.997 "adrfam": "IPv4", 00:17:48.997 "traddr": "10.0.0.1", 00:17:48.997 "trsvcid": "46786" 00:17:48.997 }, 00:17:48.997 "auth": { 00:17:48.997 "state": "completed", 00:17:48.997 "digest": "sha384", 00:17:48.997 "dhgroup": "ffdhe6144" 00:17:48.997 } 00:17:48.997 } 00:17:48.997 ]' 00:17:48.997 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.258 11:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.518 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:49.518 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.087 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.348 11:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.609 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.870 { 00:17:50.870 "cntlid": 89, 00:17:50.870 "qid": 0, 00:17:50.870 "state": "enabled", 00:17:50.870 "thread": "nvmf_tgt_poll_group_000", 00:17:50.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.870 "listen_address": { 00:17:50.870 "trtype": "TCP", 00:17:50.870 "adrfam": "IPv4", 00:17:50.870 "traddr": "10.0.0.2", 00:17:50.870 "trsvcid": "4420" 00:17:50.870 }, 00:17:50.870 "peer_address": { 00:17:50.870 "trtype": "TCP", 00:17:50.870 "adrfam": "IPv4", 00:17:50.870 "traddr": "10.0.0.1", 00:17:50.870 "trsvcid": "46800" 00:17:50.870 }, 00:17:50.870 "auth": { 00:17:50.870 "state": "completed", 00:17:50.870 "digest": "sha384", 00:17:50.870 "dhgroup": "ffdhe8192" 00:17:50.870 } 00:17:50.870 } 00:17:50.870 ]' 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.870 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.131 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.131 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.131 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.131 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.131 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.391 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:51.391 11:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.963 11:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.963 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.224 11:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.485 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.745 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.745 { 00:17:52.745 "cntlid": 91, 00:17:52.745 "qid": 0, 00:17:52.745 "state": "enabled", 00:17:52.745 "thread": "nvmf_tgt_poll_group_000", 00:17:52.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.745 "listen_address": { 00:17:52.745 "trtype": "TCP", 00:17:52.745 "adrfam": "IPv4", 00:17:52.745 "traddr": "10.0.0.2", 00:17:52.745 "trsvcid": "4420" 00:17:52.745 }, 00:17:52.745 "peer_address": { 00:17:52.745 "trtype": "TCP", 00:17:52.745 "adrfam": "IPv4", 00:17:52.745 "traddr": "10.0.0.1", 00:17:52.745 "trsvcid": "50736" 00:17:52.745 }, 00:17:52.745 "auth": { 00:17:52.745 "state": "completed", 00:17:52.745 "digest": "sha384", 00:17:52.745 "dhgroup": "ffdhe8192" 00:17:52.745 } 00:17:52.745 } 00:17:52.746 ]' 00:17:52.746 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.006 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.266 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:53.266 11:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.838 11:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:53.838 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.100 11:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.360 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.622 11:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.622 { 00:17:54.622 "cntlid": 93, 00:17:54.622 "qid": 0, 00:17:54.622 "state": "enabled", 00:17:54.622 "thread": "nvmf_tgt_poll_group_000", 00:17:54.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.622 "listen_address": { 00:17:54.622 "trtype": "TCP", 00:17:54.622 "adrfam": "IPv4", 00:17:54.622 "traddr": "10.0.0.2", 00:17:54.622 "trsvcid": "4420" 00:17:54.622 }, 00:17:54.622 "peer_address": { 00:17:54.622 "trtype": "TCP", 00:17:54.622 "adrfam": "IPv4", 00:17:54.622 "traddr": "10.0.0.1", 00:17:54.622 "trsvcid": "50768" 00:17:54.622 }, 00:17:54.622 "auth": { 00:17:54.622 "state": "completed", 00:17:54.622 "digest": "sha384", 00:17:54.622 "dhgroup": "ffdhe8192" 00:17:54.622 } 00:17:54.622 } 00:17:54.622 ]' 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.622 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.883 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.883 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.883 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.883 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.883 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.143 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:55.143 11:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:17:55.716 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.716 11:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.716 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.716 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.717 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.717 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.717 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.717 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.977 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.238 00:17:56.238 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.238 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.238 11:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.497 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.497 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.497 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.497 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.497 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.497 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.497 { 00:17:56.497 "cntlid": 95, 00:17:56.497 "qid": 0, 00:17:56.497 "state": "enabled", 00:17:56.497 "thread": "nvmf_tgt_poll_group_000", 00:17:56.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.497 "listen_address": { 00:17:56.497 "trtype": "TCP", 00:17:56.497 "adrfam": "IPv4", 00:17:56.497 "traddr": "10.0.0.2", 00:17:56.497 "trsvcid": "4420" 00:17:56.497 }, 00:17:56.497 "peer_address": { 00:17:56.497 "trtype": "TCP", 00:17:56.497 "adrfam": "IPv4", 00:17:56.497 "traddr": "10.0.0.1", 00:17:56.497 "trsvcid": "50796" 00:17:56.497 }, 00:17:56.497 "auth": { 00:17:56.497 "state": "completed", 00:17:56.497 "digest": "sha384", 00:17:56.497 "dhgroup": "ffdhe8192" 00:17:56.497 } 00:17:56.497 } 00:17:56.497 ]' 00:17:56.498 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.498 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.498 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:56.758 11:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.701 11:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.701 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.962 00:17:57.962 
11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.962 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.962 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.223 { 00:17:58.223 "cntlid": 97, 00:17:58.223 "qid": 0, 00:17:58.223 "state": "enabled", 00:17:58.223 "thread": "nvmf_tgt_poll_group_000", 00:17:58.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.223 "listen_address": { 00:17:58.223 "trtype": "TCP", 00:17:58.223 "adrfam": "IPv4", 00:17:58.223 "traddr": "10.0.0.2", 00:17:58.223 "trsvcid": "4420" 00:17:58.223 }, 00:17:58.223 "peer_address": { 00:17:58.223 "trtype": "TCP", 00:17:58.223 "adrfam": "IPv4", 00:17:58.223 "traddr": "10.0.0.1", 00:17:58.223 "trsvcid": "50818" 00:17:58.223 }, 00:17:58.223 "auth": { 00:17:58.223 "state": "completed", 00:17:58.223 "digest": "sha512", 00:17:58.223 "dhgroup": "null" 00:17:58.223 } 00:17:58.223 } 00:17:58.223 ]' 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.223 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.224 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.224 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.485 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:58.485 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.056 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.056 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.317 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.578 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.578 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.838 { 00:17:59.838 "cntlid": 99, 00:17:59.838 "qid": 0, 00:17:59.838 "state": "enabled", 00:17:59.838 "thread": "nvmf_tgt_poll_group_000", 00:17:59.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.838 "listen_address": { 00:17:59.838 "trtype": "TCP", 00:17:59.838 "adrfam": "IPv4", 00:17:59.838 "traddr": "10.0.0.2", 00:17:59.838 "trsvcid": "4420" 00:17:59.838 }, 00:17:59.838 "peer_address": { 00:17:59.838 "trtype": "TCP", 00:17:59.838 "adrfam": "IPv4", 00:17:59.838 "traddr": "10.0.0.1", 00:17:59.838 "trsvcid": "50846" 00:17:59.838 }, 00:17:59.838 "auth": { 00:17:59.838 "state": "completed", 00:17:59.838 "digest": "sha512", 00:17:59.838 "dhgroup": "null" 00:17:59.838 } 00:17:59.838 } 00:17:59.838 ]' 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.838 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.098 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:00.098 11:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.671 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:00.932 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.932 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.193 { 00:18:01.193 "cntlid": 101, 00:18:01.193 "qid": 0, 00:18:01.193 "state": "enabled", 00:18:01.193 "thread": "nvmf_tgt_poll_group_000", 00:18:01.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.193 "listen_address": { 00:18:01.193 "trtype": "TCP", 00:18:01.193 "adrfam": "IPv4", 00:18:01.193 "traddr": "10.0.0.2", 00:18:01.193 "trsvcid": "4420" 00:18:01.193 }, 00:18:01.193 "peer_address": { 00:18:01.193 "trtype": "TCP", 00:18:01.193 "adrfam": "IPv4", 00:18:01.193 "traddr": "10.0.0.1", 00:18:01.193 "trsvcid": "50872" 00:18:01.193 }, 00:18:01.193 "auth": { 00:18:01.193 "state": "completed", 00:18:01.193 "digest": "sha512", 00:18:01.193 "dhgroup": "null" 00:18:01.193 } 00:18:01.193 } 00:18:01.193 ]' 00:18:01.193 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.454 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.454 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.454 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:01.454 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.454 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.454 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.454 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.715 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:01.715 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.287 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:02.548 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:02.548 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.548 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.548 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:02.548 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.548 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.549 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.810 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.810 { 00:18:02.810 "cntlid": 103, 00:18:02.810 "qid": 0, 00:18:02.810 "state": "enabled", 00:18:02.810 "thread": "nvmf_tgt_poll_group_000", 00:18:02.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.810 "listen_address": { 00:18:02.810 "trtype": "TCP", 00:18:02.810 "adrfam": "IPv4", 00:18:02.810 "traddr": "10.0.0.2", 00:18:02.810 "trsvcid": "4420" 00:18:02.810 }, 00:18:02.810 "peer_address": { 00:18:02.810 "trtype": "TCP", 00:18:02.810 "adrfam": "IPv4", 00:18:02.810 "traddr": "10.0.0.1", 00:18:02.810 "trsvcid": "60520" 00:18:02.810 }, 00:18:02.810 "auth": { 00:18:02.810 "state": "completed", 00:18:02.810 "digest": "sha512", 00:18:02.810 "dhgroup": "null" 00:18:02.810 } 00:18:02.810 } 00:18:02.810 ]' 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.810 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.071 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:03.071 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.071 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.071 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.071 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.332 11:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:03.332 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:03.904 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.185 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.186 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.186 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.186 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
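The excerpt above completes one full connect_authenticate pass and begins the next (sha512 with ffdhe2048, key0). Each pass repeats the same RPC choreography; a minimal bash sketch of it follows. It assumes the named keys (key0/ckey0) were registered with the target earlier in the run, outside this excerpt, and that the target app listens on rpc.py's default socket, which is what the suite's rpc_cmd wrapper talks to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host app: restrict DH-HMAC-CHAP negotiation to one digest and one DH group.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target app: allow the host NQN, binding key0 (and ckey0 for bidirectional auth).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host app: attaching a controller is what actually drives the authentication exchange.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear down so the next digest/dhgroup/key combination starts from a clean state.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"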
00:18:04.186 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.186 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.186 00:18:04.535 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.535 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.535 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.535 { 00:18:04.535 "cntlid": 105, 00:18:04.535 "qid": 0, 00:18:04.535 "state": "enabled", 00:18:04.535 "thread": "nvmf_tgt_poll_group_000", 00:18:04.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.535 "listen_address": { 00:18:04.535 "trtype": "TCP", 00:18:04.535 "adrfam": "IPv4", 00:18:04.535 "traddr": "10.0.0.2", 00:18:04.535 "trsvcid": "4420" 00:18:04.535 }, 00:18:04.535 "peer_address": { 00:18:04.535 "trtype": "TCP", 00:18:04.535 "adrfam": "IPv4", 00:18:04.535 "traddr": "10.0.0.1", 00:18:04.535 "trsvcid": "60558" 00:18:04.535 }, 00:18:04.535 "auth": { 00:18:04.535 "state": "completed", 00:18:04.535 "digest": "sha512", 00:18:04.535 "dhgroup": "ffdhe2048" 00:18:04.535 } 00:18:04.535 } 00:18:04.535 ]' 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.535 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.535 11:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.821 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:04.821 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.407 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.668 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.929 00:18:05.929 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.929 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.929 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.189 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.190 { 00:18:06.190 "cntlid": 107, 00:18:06.190 "qid": 0, 00:18:06.190 "state": "enabled", 00:18:06.190 "thread": "nvmf_tgt_poll_group_000", 00:18:06.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.190 "listen_address": { 00:18:06.190 "trtype": "TCP", 00:18:06.190 "adrfam": "IPv4", 00:18:06.190 "traddr": "10.0.0.2", 00:18:06.190 "trsvcid": "4420" 00:18:06.190 }, 00:18:06.190 "peer_address": { 00:18:06.190 "trtype": "TCP", 00:18:06.190 "adrfam": "IPv4", 00:18:06.190 "traddr": "10.0.0.1", 00:18:06.190 "trsvcid": "60582" 00:18:06.190 }, 00:18:06.190 "auth": { 00:18:06.190 "state": "completed", 00:18:06.190 "digest": "sha512", 00:18:06.190 "dhgroup": "ffdhe2048" 00:18:06.190 } 00:18:06.190 } 00:18:06.190 ]' 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.190 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.450 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:06.450 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.019 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
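On the kernel-initiator side of each pass, the nvme_connect helper wraps nvme-cli with the run's generated DHHC-1 secrets. Stripped of the trace prefixes, the invocation has roughly the shape below; the secret strings are placeholders for the generated test keys, -i 1 requests a single I/O queue, and -l 0 disables the controller-loss timeout:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Connect with an explicit host secret plus a controller secret for bidirectional
# authentication, then drop the association again before the next pass.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
nvme disconnect -n "$subnqn"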
00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.280 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.540 00:18:07.540 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.540 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.540 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.800 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.800 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.800 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.800 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.800 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.800 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.800 { 00:18:07.801 "cntlid": 109, 00:18:07.801 "qid": 0, 00:18:07.801 "state": "enabled", 00:18:07.801 "thread": "nvmf_tgt_poll_group_000", 00:18:07.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.801 "listen_address": { 00:18:07.801 "trtype": "TCP", 00:18:07.801 "adrfam": "IPv4", 00:18:07.801 "traddr": "10.0.0.2", 00:18:07.801 "trsvcid": "4420" 00:18:07.801 }, 00:18:07.801 "peer_address": { 00:18:07.801 "trtype": "TCP", 00:18:07.801 "adrfam": "IPv4", 00:18:07.801 "traddr": "10.0.0.1", 00:18:07.801 "trsvcid": "60608" 00:18:07.801 }, 00:18:07.801 "auth": { 00:18:07.801 "state": "completed", 00:18:07.801 "digest": "sha512", 00:18:07.801 "dhgroup": "ffdhe2048" 00:18:07.801 } 00:18:07.801 } 00:18:07.801 ]' 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.801 11:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.801 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.061 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:08.061 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:08.632 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.892 11:19:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.892 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.152 00:18:09.152 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.152 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.152 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.414 { 00:18:09.414 "cntlid": 111, 00:18:09.414 "qid": 0, 00:18:09.414 "state": "enabled", 00:18:09.414 "thread": "nvmf_tgt_poll_group_000", 00:18:09.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.414 "listen_address": { 00:18:09.414 "trtype": "TCP", 00:18:09.414 "adrfam": "IPv4", 00:18:09.414 "traddr": "10.0.0.2", 00:18:09.414 "trsvcid": "4420" 00:18:09.414 }, 00:18:09.414 "peer_address": { 00:18:09.414 "trtype": "TCP", 00:18:09.414 "adrfam": "IPv4", 00:18:09.414 "traddr": "10.0.0.1", 00:18:09.414 "trsvcid": "60624" 00:18:09.414 }, 00:18:09.414 "auth": { 00:18:09.414 "state": "completed", 00:18:09.414 "digest": "sha512", 00:18:09.414 "dhgroup": "ffdhe2048" 00:18:09.414 } 00:18:09.414 } 00:18:09.414 ]' 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.414 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.414 
11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.414 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.414 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.414 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.414 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.414 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.674 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:09.674 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.244 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.505 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.765 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.765 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.027 { 00:18:11.027 "cntlid": 113, 00:18:11.027 "qid": 0, 00:18:11.027 "state": "enabled", 00:18:11.027 "thread": "nvmf_tgt_poll_group_000", 00:18:11.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.027 "listen_address": { 00:18:11.027 "trtype": "TCP", 00:18:11.027 "adrfam": "IPv4", 00:18:11.027 "traddr": "10.0.0.2", 00:18:11.027 "trsvcid": "4420" 00:18:11.027 }, 00:18:11.027 "peer_address": { 00:18:11.027 "trtype": "TCP", 00:18:11.027 "adrfam": "IPv4", 00:18:11.027 "traddr": "10.0.0.1", 00:18:11.027 "trsvcid": "60652" 00:18:11.027 }, 00:18:11.027 "auth": { 00:18:11.027 "state": "completed", 00:18:11.027 "digest": "sha512", 00:18:11.027 "dhgroup": "ffdhe3072" 00:18:11.027 } 00:18:11.027 } 00:18:11.027 ]' 00:18:11.027 11:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.027 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.288 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:11.288 11:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.858 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.119 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.381 00:18:12.381 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.381 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.381 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.381 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.381 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.381 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.381 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.642 { 00:18:12.642 "cntlid": 115, 00:18:12.642 "qid": 0, 00:18:12.642 "state": "enabled", 00:18:12.642 "thread": "nvmf_tgt_poll_group_000", 00:18:12.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.642 "listen_address": { 00:18:12.642 "trtype": "TCP", 00:18:12.642 "adrfam": "IPv4", 00:18:12.642 "traddr": "10.0.0.2", 00:18:12.642 "trsvcid": "4420" 00:18:12.642 }, 00:18:12.642 "peer_address": { 00:18:12.642 "trtype": "TCP", 00:18:12.642 "adrfam": "IPv4", 
00:18:12.642 "traddr": "10.0.0.1", 00:18:12.642 "trsvcid": "57442" 00:18:12.642 }, 00:18:12.642 "auth": { 00:18:12.642 "state": "completed", 00:18:12.642 "digest": "sha512", 00:18:12.642 "dhgroup": "ffdhe3072" 00:18:12.642 } 00:18:12.642 } 00:18:12.642 ]' 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.642 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.903 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:12.903 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.474 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
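A pass only counts once the target's view of the new queue pair reflects what was just configured: nvmf_subsystem_get_qpairs has to report the negotiated digest and DH group and an auth state of "completed". Collapsed into a standalone check, the jq probes used throughout these passes amount to this sketch (expected values shown for the sha512/ffdhe3072 passes; it assumes the target's RPC listener is on rpc.py's default socket):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Fetch the qpair list once, then assert the negotiated auth parameters on qpair 0.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]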
00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.735 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.997 00:18:13.997 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.997 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.997 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.258 { 00:18:14.258 "cntlid": 117, 00:18:14.258 "qid": 0, 00:18:14.258 "state": "enabled", 00:18:14.258 "thread": "nvmf_tgt_poll_group_000", 00:18:14.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.258 "listen_address": { 00:18:14.258 "trtype": "TCP", 
00:18:14.258 "adrfam": "IPv4", 00:18:14.258 "traddr": "10.0.0.2", 00:18:14.258 "trsvcid": "4420" 00:18:14.258 }, 00:18:14.258 "peer_address": { 00:18:14.258 "trtype": "TCP", 00:18:14.258 "adrfam": "IPv4", 00:18:14.258 "traddr": "10.0.0.1", 00:18:14.258 "trsvcid": "57470" 00:18:14.258 }, 00:18:14.258 "auth": { 00:18:14.258 "state": "completed", 00:18:14.258 "digest": "sha512", 00:18:14.258 "dhgroup": "ffdhe3072" 00:18:14.258 } 00:18:14.258 } 00:18:14.258 ]' 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.258 11:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.518 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:14.518 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.105 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.367 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.630 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.630 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.892 { 00:18:15.892 "cntlid": 119, 00:18:15.892 "qid": 0, 00:18:15.892 "state": "enabled", 00:18:15.892 "thread": "nvmf_tgt_poll_group_000", 00:18:15.892 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.892 "listen_address": { 00:18:15.892 "trtype": "TCP", 00:18:15.892 "adrfam": "IPv4", 00:18:15.892 "traddr": "10.0.0.2", 00:18:15.892 "trsvcid": "4420" 00:18:15.892 }, 00:18:15.892 "peer_address": { 00:18:15.892 "trtype": "TCP", 00:18:15.892 "adrfam": "IPv4", 00:18:15.892 "traddr": "10.0.0.1", 00:18:15.892 "trsvcid": "57498" 00:18:15.892 }, 00:18:15.892 "auth": { 00:18:15.892 "state": "completed", 00:18:15.892 "digest": "sha512", 00:18:15.892 "dhgroup": "ffdhe3072" 00:18:15.892 } 00:18:15.892 } 00:18:15.892 ]' 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.892 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.154 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:16.154 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:16.728 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.729 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.729 11:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.990 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.252 00:18:17.252 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.252 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.252 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.513 11:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.513 { 00:18:17.513 "cntlid": 121, 00:18:17.513 "qid": 0, 00:18:17.513 "state": "enabled", 00:18:17.513 "thread": "nvmf_tgt_poll_group_000", 00:18:17.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.513 "listen_address": { 00:18:17.513 "trtype": "TCP", 00:18:17.513 "adrfam": "IPv4", 00:18:17.513 "traddr": "10.0.0.2", 00:18:17.513 "trsvcid": "4420" 00:18:17.513 }, 00:18:17.513 "peer_address": { 00:18:17.513 "trtype": "TCP", 00:18:17.513 "adrfam": "IPv4", 00:18:17.513 "traddr": "10.0.0.1", 00:18:17.513 "trsvcid": "57524" 00:18:17.513 }, 00:18:17.513 "auth": { 00:18:17.513 "state": "completed", 00:18:17.513 "digest": "sha512", 00:18:17.513 "dhgroup": "ffdhe4096" 00:18:17.513 } 00:18:17.513 } 00:18:17.513 ]' 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.513 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.777 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:17.777 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
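The records above and below trace SPDK's DH-HMAC-CHAP test loop end to end: for each digest/dhgroup/key combination the host is restricted to that single combination via bdev_nvme_set_options, the host NQN is (re)registered on the subsystem with the matching key pair, a controller attach forces the in-band authentication, and the target-side qpair's auth block is verified with jq before the controller is detached again. Below is a minimal sketch of one such iteration, assuming a target listening on 10.0.0.2:4420, a host RPC socket at /var/tmp/host.sock, and the key1/ckey1 key names used in the iteration that follows; $HOSTNQN is a placeholder introduced here for the uuid-based host NQN that appears throughout the log.

    # Host side: allow exactly one digest/dhgroup pair for the handshake.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Target side: admit the host NQN with a bidirectional key pair.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Attach a controller; the attach succeeds only if DH-HMAC-CHAP completes.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Confirm the negotiated parameters on the resulting qpair.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'

The nvme-cli path exercised in the same loop passes the raw DHHC-1 secrets directly (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...) rather than registered key names, which is why both forms appear in these records.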
00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.349 11:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.611 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.872 00:18:18.872 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.872 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.872 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.133 { 00:18:19.133 "cntlid": 123, 00:18:19.133 "qid": 0, 00:18:19.133 "state": "enabled", 00:18:19.133 "thread": "nvmf_tgt_poll_group_000", 00:18:19.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.133 "listen_address": { 00:18:19.133 "trtype": "TCP", 00:18:19.133 "adrfam": "IPv4", 00:18:19.133 "traddr": "10.0.0.2", 00:18:19.133 "trsvcid": "4420" 00:18:19.133 }, 00:18:19.133 "peer_address": { 00:18:19.133 "trtype": "TCP", 00:18:19.133 "adrfam": "IPv4", 00:18:19.133 "traddr": "10.0.0.1", 00:18:19.133 "trsvcid": "57546" 00:18:19.133 }, 00:18:19.133 "auth": { 00:18:19.133 "state": "completed", 00:18:19.133 "digest": "sha512", 00:18:19.133 "dhgroup": "ffdhe4096" 00:18:19.133 } 00:18:19.133 } 00:18:19.133 ]' 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.133 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.134 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.134 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.134 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.134 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.134 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.396 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:19.396 11:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.967 11:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.967 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.229 11:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.490 00:18:20.490 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.490 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.490 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.751 11:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.751 { 00:18:20.751 "cntlid": 125, 00:18:20.751 "qid": 0, 00:18:20.751 "state": "enabled", 00:18:20.751 "thread": "nvmf_tgt_poll_group_000", 00:18:20.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.751 "listen_address": { 00:18:20.751 "trtype": "TCP", 00:18:20.751 "adrfam": "IPv4", 00:18:20.751 "traddr": "10.0.0.2", 00:18:20.751 "trsvcid": "4420" 00:18:20.751 }, 00:18:20.751 "peer_address": { 00:18:20.751 "trtype": "TCP", 00:18:20.751 "adrfam": "IPv4", 00:18:20.751 "traddr": "10.0.0.1", 00:18:20.751 "trsvcid": "57574" 00:18:20.751 }, 00:18:20.751 "auth": { 00:18:20.751 "state": "completed", 00:18:20.751 "digest": "sha512", 00:18:20.751 "dhgroup": "ffdhe4096" 00:18:20.751 } 00:18:20.751 } 00:18:20.751 ]' 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.751 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.011 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:21.011 11:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.582 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.843 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.104 00:18:22.104 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.104 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.104 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.364 11:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.364 { 00:18:22.364 "cntlid": 127, 00:18:22.364 "qid": 0, 00:18:22.364 "state": "enabled", 00:18:22.364 "thread": "nvmf_tgt_poll_group_000", 00:18:22.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.364 "listen_address": { 00:18:22.364 "trtype": "TCP", 00:18:22.364 "adrfam": "IPv4", 00:18:22.364 "traddr": "10.0.0.2", 00:18:22.364 "trsvcid": "4420" 00:18:22.364 }, 00:18:22.364 "peer_address": { 00:18:22.364 "trtype": "TCP", 00:18:22.364 "adrfam": "IPv4", 00:18:22.364 "traddr": "10.0.0.1", 00:18:22.364 "trsvcid": "57602" 00:18:22.364 }, 00:18:22.364 "auth": { 00:18:22.364 "state": "completed", 00:18:22.364 "digest": "sha512", 00:18:22.364 "dhgroup": "ffdhe4096" 00:18:22.364 } 00:18:22.364 } 00:18:22.364 ]' 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.364 11:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.364 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.364 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.364 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.624 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:22.624 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.194 11:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.453 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.713 00:18:23.713 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.713 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.713 
11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.973 { 00:18:23.973 "cntlid": 129, 00:18:23.973 "qid": 0, 00:18:23.973 "state": "enabled", 00:18:23.973 "thread": "nvmf_tgt_poll_group_000", 00:18:23.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.973 "listen_address": { 00:18:23.973 "trtype": "TCP", 00:18:23.973 "adrfam": "IPv4", 00:18:23.973 "traddr": "10.0.0.2", 00:18:23.973 "trsvcid": "4420" 00:18:23.973 }, 00:18:23.973 "peer_address": { 00:18:23.973 "trtype": "TCP", 00:18:23.973 "adrfam": "IPv4", 00:18:23.973 "traddr": "10.0.0.1", 00:18:23.973 "trsvcid": "56052" 00:18:23.973 }, 00:18:23.973 "auth": { 00:18:23.973 "state": "completed", 00:18:23.973 "digest": "sha512", 00:18:23.973 "dhgroup": "ffdhe6144" 00:18:23.973 } 00:18:23.973 } 00:18:23.973 ]' 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.973 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.233 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.233 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.233 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.233 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:24.233 11:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret 
DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.173 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.174 11:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.434 00:18:25.434 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.434 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.434 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.695 { 00:18:25.695 "cntlid": 131, 00:18:25.695 "qid": 0, 00:18:25.695 "state": "enabled", 00:18:25.695 "thread": "nvmf_tgt_poll_group_000", 00:18:25.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.695 "listen_address": { 00:18:25.695 "trtype": "TCP", 00:18:25.695 "adrfam": "IPv4", 00:18:25.695 "traddr": "10.0.0.2", 00:18:25.695 "trsvcid": "4420" 00:18:25.695 }, 00:18:25.695 "peer_address": { 00:18:25.695 "trtype": "TCP", 00:18:25.695 "adrfam": "IPv4", 00:18:25.695 "traddr": "10.0.0.1", 00:18:25.695 "trsvcid": "56090" 00:18:25.695 }, 00:18:25.695 "auth": { 00:18:25.695 "state": "completed", 00:18:25.695 "digest": "sha512", 00:18:25.695 "dhgroup": "ffdhe6144" 00:18:25.695 } 00:18:25.695 } 00:18:25.695 ]' 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.695 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:25.956 11:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.897 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.157 00:18:27.158 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.158 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.158 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.418 { 00:18:27.418 "cntlid": 133, 00:18:27.418 "qid": 0, 00:18:27.418 "state": "enabled", 00:18:27.418 "thread": "nvmf_tgt_poll_group_000", 00:18:27.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.418 "listen_address": { 00:18:27.418 "trtype": "TCP", 00:18:27.418 "adrfam": "IPv4", 00:18:27.418 "traddr": "10.0.0.2", 00:18:27.418 "trsvcid": "4420" 00:18:27.418 }, 00:18:27.418 "peer_address": { 00:18:27.418 "trtype": "TCP", 00:18:27.418 "adrfam": "IPv4", 00:18:27.418 "traddr": "10.0.0.1", 00:18:27.418 "trsvcid": "56118" 00:18:27.418 }, 00:18:27.418 "auth": { 00:18:27.418 "state": "completed", 00:18:27.418 "digest": "sha512", 00:18:27.418 "dhgroup": "ffdhe6144" 00:18:27.418 } 00:18:27.418 } 00:18:27.418 ]' 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.418 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.679 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.679 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.679 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.679 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret 
DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:27.679 11:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:28.622 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.883 00:18:28.883 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.883 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.883 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.144 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.145 { 00:18:29.145 "cntlid": 135, 00:18:29.145 "qid": 0, 00:18:29.145 "state": "enabled", 00:18:29.145 "thread": "nvmf_tgt_poll_group_000", 00:18:29.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.145 "listen_address": { 00:18:29.145 "trtype": "TCP", 00:18:29.145 "adrfam": "IPv4", 00:18:29.145 "traddr": "10.0.0.2", 00:18:29.145 "trsvcid": "4420" 00:18:29.145 }, 00:18:29.145 "peer_address": { 00:18:29.145 "trtype": "TCP", 00:18:29.145 "adrfam": "IPv4", 00:18:29.145 "traddr": "10.0.0.1", 00:18:29.145 "trsvcid": "56160" 00:18:29.145 }, 00:18:29.145 "auth": { 00:18:29.145 "state": "completed", 00:18:29.145 "digest": "sha512", 00:18:29.145 "dhgroup": "ffdhe6144" 00:18:29.145 } 00:18:29.145 } 00:18:29.145 ]' 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:29.145 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.406 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.406 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.406 11:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.406 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:29.406 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:29.979 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.240 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.810 00:18:30.810 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.810 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.810 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.070 { 00:18:31.070 "cntlid": 137, 00:18:31.070 "qid": 0, 00:18:31.070 "state": "enabled", 00:18:31.070 "thread": "nvmf_tgt_poll_group_000", 00:18:31.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.070 "listen_address": { 00:18:31.070 "trtype": "TCP", 00:18:31.070 "adrfam": "IPv4", 00:18:31.070 "traddr": "10.0.0.2", 00:18:31.070 "trsvcid": "4420" 00:18:31.070 }, 00:18:31.070 "peer_address": { 00:18:31.070 "trtype": "TCP", 00:18:31.070 "adrfam": "IPv4", 00:18:31.070 "traddr": "10.0.0.1", 00:18:31.070 "trsvcid": "56176" 00:18:31.070 }, 00:18:31.070 "auth": { 00:18:31.070 "state": "completed", 00:18:31.070 "digest": "sha512", 00:18:31.070 "dhgroup": "ffdhe8192" 00:18:31.070 } 00:18:31.070 } 00:18:31.070 ]' 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.070 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.071 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.071 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.071 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.071 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.071 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.332 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:31.332 11:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.903 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.165 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.165 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.737 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.737 { 00:18:32.737 "cntlid": 139, 00:18:32.737 "qid": 0, 00:18:32.737 "state": "enabled", 00:18:32.737 "thread": "nvmf_tgt_poll_group_000", 00:18:32.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:32.737 "listen_address": { 00:18:32.737 "trtype": "TCP", 00:18:32.737 "adrfam": "IPv4", 00:18:32.737 "traddr": "10.0.0.2", 00:18:32.737 "trsvcid": "4420" 00:18:32.737 }, 00:18:32.737 "peer_address": { 00:18:32.737 "trtype": "TCP", 00:18:32.737 "adrfam": "IPv4", 00:18:32.737 "traddr": "10.0.0.1", 00:18:32.737 "trsvcid": "51598" 00:18:32.737 }, 00:18:32.737 "auth": { 00:18:32.737 "state": "completed", 00:18:32.737 "digest": "sha512", 00:18:32.737 "dhgroup": "ffdhe8192" 00:18:32.737 } 00:18:32.737 } 00:18:32.737 ]' 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.737 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.998 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.998 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.998 11:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.998 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.998 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.998 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:32.998 11:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: --dhchap-ctrl-secret DHHC-1:02:NjUzOTkxZjgxODA0NzY1YjRkMzgyZjMzZmI2OTBhZWI5ODQ2MTFmYTMxZGY5OTk3lOLrqw==: 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.943 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.944 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.944 11:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.944 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.944 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.944 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.944 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.944 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.515 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.515 { 00:18:34.515 "cntlid": 141, 00:18:34.515 "qid": 0, 00:18:34.515 "state": "enabled", 00:18:34.515 "thread": "nvmf_tgt_poll_group_000", 00:18:34.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.515 "listen_address": { 00:18:34.515 "trtype": "TCP", 00:18:34.515 "adrfam": "IPv4", 00:18:34.515 "traddr": "10.0.0.2", 00:18:34.515 "trsvcid": "4420" 00:18:34.515 }, 00:18:34.515 "peer_address": { 00:18:34.515 "trtype": "TCP", 00:18:34.515 "adrfam": "IPv4", 00:18:34.515 "traddr": "10.0.0.1", 00:18:34.515 "trsvcid": "51634" 00:18:34.515 }, 00:18:34.515 "auth": { 00:18:34.515 "state": "completed", 00:18:34.515 "digest": "sha512", 00:18:34.515 "dhgroup": "ffdhe8192" 00:18:34.515 } 00:18:34.515 } 00:18:34.515 ]' 00:18:34.515 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.776 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.776 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.776 11:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.776 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.776 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.776 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.776 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.037 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:35.037 11:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:01:Y2NkN2U4ODFjOTcwNGUwZTZlZjY3YTA1N2I1Y2NkZjYlDPmE: 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.609 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.871 11:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.871 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.132 00:18:36.394 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.394 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.394 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.394 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.394 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.394 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.394 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.394 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.394 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.394 { 00:18:36.394 "cntlid": 143, 00:18:36.394 "qid": 0, 00:18:36.394 "state": "enabled", 00:18:36.394 "thread": "nvmf_tgt_poll_group_000", 00:18:36.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.394 "listen_address": { 00:18:36.394 "trtype": "TCP", 00:18:36.394 "adrfam": "IPv4", 00:18:36.394 "traddr": "10.0.0.2", 00:18:36.394 "trsvcid": "4420" 00:18:36.394 }, 00:18:36.394 "peer_address": { 00:18:36.394 "trtype": "TCP", 00:18:36.394 "adrfam": "IPv4", 00:18:36.394 "traddr": "10.0.0.1", 00:18:36.394 "trsvcid": "51662" 00:18:36.394 }, 00:18:36.394 "auth": { 00:18:36.394 "state": "completed", 00:18:36.394 "digest": "sha512", 00:18:36.394 "dhgroup": "ffdhe8192" 00:18:36.394 } 00:18:36.394 } 00:18:36.395 ]' 00:18:36.395 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.654 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.654 
11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.654 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.654 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.654 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.654 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.654 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.915 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:36.915 11:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.486 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.747 11:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.747 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.007 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.268 { 00:18:38.268 "cntlid": 145, 00:18:38.268 "qid": 0, 00:18:38.268 "state": "enabled", 00:18:38.268 "thread": "nvmf_tgt_poll_group_000", 00:18:38.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.268 "listen_address": { 00:18:38.268 "trtype": "TCP", 00:18:38.268 "adrfam": "IPv4", 00:18:38.268 "traddr": "10.0.0.2", 00:18:38.268 "trsvcid": "4420" 00:18:38.268 }, 00:18:38.268 "peer_address": { 00:18:38.268 
"trtype": "TCP", 00:18:38.268 "adrfam": "IPv4", 00:18:38.268 "traddr": "10.0.0.1", 00:18:38.268 "trsvcid": "51698" 00:18:38.268 }, 00:18:38.268 "auth": { 00:18:38.268 "state": "completed", 00:18:38.268 "digest": "sha512", 00:18:38.268 "dhgroup": "ffdhe8192" 00:18:38.268 } 00:18:38.268 } 00:18:38.268 ]' 00:18:38.268 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.530 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.791 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:38.791 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjEwMWViNmNjNjc2ZWU4YzIwZGMwZDAzZmIzMjMxNTY2MDkzMjJhNTM2ZTQ0OTU1MMdlIQ==: --dhchap-ctrl-secret DHHC-1:03:NTFkY2JlNjkxZjYyYzY0NGY3MDQxODZlMTliYzZhNDg0M2Y3MzE0MGRhNzY4ZWFjODEzOTI3MTNlZGY5ZTE0MA1e11M=: 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:39.363 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:39.936 request: 00:18:39.936 { 00:18:39.936 "name": "nvme0", 00:18:39.936 "trtype": "tcp", 00:18:39.936 "traddr": "10.0.0.2", 00:18:39.936 "adrfam": "ipv4", 00:18:39.936 "trsvcid": "4420", 00:18:39.936 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.936 "prchk_reftag": false, 00:18:39.936 "prchk_guard": false, 00:18:39.936 "hdgst": false, 00:18:39.936 "ddgst": false, 00:18:39.936 "dhchap_key": "key2", 00:18:39.936 "allow_unrecognized_csi": false, 00:18:39.936 "method": "bdev_nvme_attach_controller", 00:18:39.936 "req_id": 1 00:18:39.936 } 00:18:39.936 Got JSON-RPC error response 00:18:39.936 response: 00:18:39.936 { 00:18:39.936 "code": -5, 00:18:39.936 "message": "Input/output error" 00:18:39.936 } 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 11:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.936 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.197 request: 00:18:40.197 { 00:18:40.197 "name": "nvme0", 00:18:40.197 "trtype": "tcp", 00:18:40.197 "traddr": "10.0.0.2", 00:18:40.197 "adrfam": "ipv4", 00:18:40.197 "trsvcid": "4420", 00:18:40.197 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.197 "prchk_reftag": false, 00:18:40.197 "prchk_guard": false, 00:18:40.197 "hdgst": false, 00:18:40.197 "ddgst": false, 00:18:40.197 "dhchap_key": "key1", 00:18:40.197 "dhchap_ctrlr_key": "ckey2", 00:18:40.197 "allow_unrecognized_csi": false, 00:18:40.197 "method": "bdev_nvme_attach_controller", 00:18:40.197 "req_id": 1 00:18:40.197 } 00:18:40.197 Got JSON-RPC error response 00:18:40.197 response: 00:18:40.197 { 00:18:40.197 "code": -5, 00:18:40.197 "message": "Input/output error" 00:18:40.197 } 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:40.197 11:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.197 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.769 request: 00:18:40.769 { 00:18:40.769 "name": "nvme0", 00:18:40.769 "trtype": "tcp", 00:18:40.769 "traddr": "10.0.0.2", 00:18:40.769 "adrfam": "ipv4", 00:18:40.769 "trsvcid": "4420", 00:18:40.769 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:40.769 "prchk_reftag": false, 00:18:40.769 "prchk_guard": false, 00:18:40.769 "hdgst": false, 00:18:40.769 "ddgst": false, 00:18:40.769 "dhchap_key": "key1", 00:18:40.769 "dhchap_ctrlr_key": "ckey1", 00:18:40.769 "allow_unrecognized_csi": false, 00:18:40.769 "method": "bdev_nvme_attach_controller", 00:18:40.769 "req_id": 1 00:18:40.769 } 00:18:40.769 Got JSON-RPC error response 00:18:40.769 response: 00:18:40.769 { 00:18:40.769 "code": -5, 00:18:40.769 "message": "Input/output error" 00:18:40.769 } 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2705878 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2705878 ']' 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2705878 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705878 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705878' 00:18:40.769 killing process with pid 2705878 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2705878 00:18:40.769 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2705878 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2731978 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2731978 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2731978 ']' 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.079 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.686 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.686 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:41.686 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.686 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.686 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2731978 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2731978 ']' 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
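The restart just above brings the target back with --wait-for-rpc and -L nvmf_auth so the DH-CHAP material can be registered before any subsystem starts answering. Condensed from the rpc_cmd trace that follows, the keyring setup reduces to the sequence below; this is a sketch rather than the harness's literal invocation (the harness batches these through rpc_cmd against the target's default RPC socket), and the /tmp/spdk.key-* paths are the temporary key files generated earlier in this run:

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.onJ
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gve
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.Rqm
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cVx
  scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.Hkz
  scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n8O
  scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.2Yj

Note that key3 has no controller counterpart (the [[ -n '' ]] test at @176 fails), which is why the subsequent nvmf_subsystem_add_host call for key3 carries --dhchap-key only, with no --dhchap-ctrlr-key.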
00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.946 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.207 null0 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.onJ 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.gve ]] 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gve 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Rqm 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.207 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cVx ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cVx 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:42.208 11:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Hkz 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.n8O ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.n8O 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Yj 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
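[Editor's note] The keyring_file_add_key calls above load the generated DHHC-1 key files as key0..key3 (with ckey0..ckey2 as the optional controller keys), and connect_authenticate then runs its first pass with sha512/ffdhe8192 using key3. Condensed into a sketch built only from RPCs visible in this log (the host NQN is the uuid-based one used throughout; the host-side app on /var/tmp/host.sock has registered the same key file earlier in auth.sh):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # target side: register the key file, then allow the host with that key
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.2Yj
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    # host side: attach a controller, authenticating with the same key name
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3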
00:18:42.208 11:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.148 nvme0n1 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.148 { 00:18:43.148 "cntlid": 1, 00:18:43.148 "qid": 0, 00:18:43.148 "state": "enabled", 00:18:43.148 "thread": "nvmf_tgt_poll_group_000", 00:18:43.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.148 "listen_address": { 00:18:43.148 "trtype": "TCP", 00:18:43.148 "adrfam": "IPv4", 00:18:43.148 "traddr": "10.0.0.2", 00:18:43.148 "trsvcid": "4420" 00:18:43.148 }, 00:18:43.148 "peer_address": { 00:18:43.148 "trtype": "TCP", 00:18:43.148 "adrfam": "IPv4", 00:18:43.148 "traddr": "10.0.0.1", 00:18:43.148 "trsvcid": "58978" 00:18:43.148 }, 00:18:43.148 "auth": { 00:18:43.148 "state": "completed", 00:18:43.148 "digest": "sha512", 00:18:43.148 "dhgroup": "ffdhe8192" 00:18:43.148 } 00:18:43.148 } 00:18:43.148 ]' 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.148 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.408 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.408 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.408 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.408 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:43.408 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:43.978 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.238 11:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.498 request: 00:18:44.498 { 00:18:44.498 "name": "nvme0", 00:18:44.498 "trtype": "tcp", 00:18:44.498 "traddr": "10.0.0.2", 00:18:44.498 "adrfam": "ipv4", 00:18:44.498 "trsvcid": "4420", 00:18:44.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.498 "prchk_reftag": false, 00:18:44.498 "prchk_guard": false, 00:18:44.498 "hdgst": false, 00:18:44.498 "ddgst": false, 00:18:44.498 "dhchap_key": "key3", 00:18:44.498 "allow_unrecognized_csi": false, 00:18:44.498 "method": "bdev_nvme_attach_controller", 00:18:44.498 "req_id": 1 00:18:44.498 } 00:18:44.498 Got JSON-RPC error response 00:18:44.498 response: 00:18:44.498 { 00:18:44.498 "code": -5, 00:18:44.498 "message": "Input/output error" 00:18:44.498 } 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:44.498 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.759 request: 00:18:44.759 { 00:18:44.759 "name": "nvme0", 00:18:44.759 "trtype": "tcp", 00:18:44.759 "traddr": "10.0.0.2", 00:18:44.759 "adrfam": "ipv4", 00:18:44.759 "trsvcid": "4420", 00:18:44.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.759 "prchk_reftag": false, 00:18:44.759 "prchk_guard": false, 00:18:44.759 "hdgst": false, 00:18:44.759 "ddgst": false, 00:18:44.759 "dhchap_key": "key3", 00:18:44.759 "allow_unrecognized_csi": false, 00:18:44.759 "method": "bdev_nvme_attach_controller", 00:18:44.759 "req_id": 1 00:18:44.759 } 00:18:44.759 Got JSON-RPC error response 00:18:44.759 response: 00:18:44.759 { 00:18:44.759 "code": -5, 00:18:44.759 "message": "Input/output error" 00:18:44.759 } 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.759 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.020 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.280 request: 00:18:45.280 { 00:18:45.280 "name": "nvme0", 00:18:45.280 "trtype": "tcp", 00:18:45.280 "traddr": "10.0.0.2", 00:18:45.280 "adrfam": "ipv4", 00:18:45.280 "trsvcid": "4420", 00:18:45.280 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:45.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.280 "prchk_reftag": false, 00:18:45.280 "prchk_guard": false, 00:18:45.280 "hdgst": false, 00:18:45.280 "ddgst": false, 00:18:45.280 "dhchap_key": "key0", 00:18:45.280 "dhchap_ctrlr_key": "key1", 00:18:45.280 "allow_unrecognized_csi": false, 00:18:45.280 "method": "bdev_nvme_attach_controller", 00:18:45.280 "req_id": 1 00:18:45.280 } 00:18:45.280 Got JSON-RPC error response 00:18:45.280 response: 00:18:45.280 { 00:18:45.280 "code": -5, 00:18:45.280 "message": "Input/output error" 00:18:45.280 } 00:18:45.280 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:45.280 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.280 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.280 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.280 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:45.280 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:45.280 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:45.540 nvme0n1 00:18:45.540 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:45.540 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:45.540 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.801 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.801 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.801 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:46.062 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:46.631 nvme0n1 00:18:46.631 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:46.631 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:46.631 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:46.891 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.151 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.151 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:47.151 11:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: --dhchap-ctrl-secret DHHC-1:03:MmJjNDNkMTc2MGY5MDMxZTI1NjY4NTBiNDBkMmE3MGE5M2RjMDQ1YTNhMDBkMzE0MDBjZjQ3YTc4MTE4MDczN90Hoe8=: 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.723 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:47.984 11:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:48.556 request: 00:18:48.556 { 00:18:48.556 "name": "nvme0", 00:18:48.556 "trtype": "tcp", 00:18:48.556 "traddr": "10.0.0.2", 00:18:48.556 "adrfam": "ipv4", 00:18:48.556 "trsvcid": "4420", 00:18:48.556 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:48.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.556 "prchk_reftag": false, 00:18:48.556 "prchk_guard": false, 00:18:48.556 "hdgst": false, 00:18:48.556 "ddgst": false, 00:18:48.556 "dhchap_key": "key1", 00:18:48.556 "allow_unrecognized_csi": false, 00:18:48.556 "method": "bdev_nvme_attach_controller", 00:18:48.556 "req_id": 1 00:18:48.556 } 00:18:48.556 Got JSON-RPC error response 00:18:48.556 response: 00:18:48.556 { 00:18:48.556 "code": -5, 00:18:48.556 "message": "Input/output error" 00:18:48.556 } 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.556 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:49.127 nvme0n1 00:18:49.127 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:49.127 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:49.127 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.388 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.388 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.388 11:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.388 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.388 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.388 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:49.649 nvme0n1 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.649 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:49.909 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.909 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.909 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: '' 2s 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: ]] 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmY4YjUyZWQwZGFhMDU3ZWE4MjBkODczODMzNDY0Y2W8UKNG: 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:50.170 11:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: 2s 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:52.079 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: ]] 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OGU1MjRlOGY0NzBjY2RhY2M5ZTFlODlkZGUzZjM3MTA4YmViMDE0ZTY2NjA5NWQwZBKMRg==: 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:52.080 11:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:54.623 11:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:54.883 nvme0n1 00:18:54.883 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:54.883 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.883 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.883 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.883 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:54.883 11:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:55.452 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:55.452 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:55.452 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.711 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:55.970 11:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:56.538 request: 00:18:56.538 { 00:18:56.538 "name": "nvme0", 00:18:56.538 "dhchap_key": "key1", 00:18:56.538 "dhchap_ctrlr_key": "key3", 00:18:56.538 "method": "bdev_nvme_set_keys", 00:18:56.538 "req_id": 1 00:18:56.538 } 00:18:56.538 Got JSON-RPC error response 00:18:56.538 response: 00:18:56.538 { 00:18:56.538 "code": -13, 00:18:56.538 "message": "Permission denied" 00:18:56.538 } 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:56.538 11:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:57.920 11:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:58.492 nvme0n1 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
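[Editor's note] This stretch exercises live re-keying. The contract the log demonstrates: the target must learn the new key pair first via nvmf_subsystem_set_keys, after which the host can re-authenticate the existing controller in place with bdev_nvme_set_keys; proposing a pair the target was not given fails with JSON-RPC error -13 (Permission denied), as in the request/response above, and the jq length / sleep 1s loop then waits for the host's controller count to drop as the rejected session is torn down. The success path, sketched with the pair used in the rotation earlier in this block:

    # 1) target learns the new pair for this host...
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # 2) ...then the host re-authenticates its live controller with it
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3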
00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:58.492 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:59.063 request: 00:18:59.063 { 00:18:59.063 "name": "nvme0", 00:18:59.063 "dhchap_key": "key2", 00:18:59.063 "dhchap_ctrlr_key": "key0", 00:18:59.063 "method": "bdev_nvme_set_keys", 00:18:59.063 "req_id": 1 00:18:59.063 } 00:18:59.063 Got JSON-RPC error response 00:18:59.063 response: 00:18:59.063 { 00:18:59.063 "code": -13, 00:18:59.063 "message": "Permission denied" 00:18:59.063 } 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:59.063 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.323 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:59.324 11:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:00.264 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:00.264 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:00.264 11:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2705914 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2705914 ']' 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2705914 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:00.523 
11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2705914 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2705914' 00:19:00.523 killing process with pid 2705914 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2705914 00:19:00.523 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2705914 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.783 rmmod nvme_tcp 00:19:00.783 rmmod nvme_fabrics 00:19:00.783 rmmod nvme_keyring 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2731978 ']' 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2731978 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2731978 ']' 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2731978 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731978 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731978' 00:19:00.783 killing process with pid 2731978 00:19:00.783 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2731978 00:19:00.783 11:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2731978 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.043 11:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.952 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.952 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.onJ /tmp/spdk.key-sha256.Rqm /tmp/spdk.key-sha384.Hkz /tmp/spdk.key-sha512.2Yj /tmp/spdk.key-sha512.gve /tmp/spdk.key-sha384.cVx /tmp/spdk.key-sha256.n8O '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:02.952 00:19:02.952 real 2m36.698s 00:19:02.952 user 5m52.255s 00:19:02.952 sys 0m24.844s 00:19:02.952 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.952 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.952 ************************************ 00:19:02.952 END TEST nvmf_auth_target 00:19:02.952 ************************************ 00:19:02.952 11:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:02.952 11:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:02.953 11:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:02.953 11:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.953 11:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.214 ************************************ 00:19:03.214 START TEST nvmf_bdevio_no_huge 00:19:03.214 ************************************ 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:03.214 * Looking for test storage... 
00:19:03.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.214 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:03.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.215 --rc genhtml_branch_coverage=1 00:19:03.215 --rc genhtml_function_coverage=1 00:19:03.215 --rc genhtml_legend=1 00:19:03.215 --rc geninfo_all_blocks=1 00:19:03.215 --rc geninfo_unexecuted_blocks=1 00:19:03.215 00:19:03.215 ' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:03.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.215 --rc genhtml_branch_coverage=1 00:19:03.215 --rc genhtml_function_coverage=1 00:19:03.215 --rc genhtml_legend=1 00:19:03.215 --rc geninfo_all_blocks=1 00:19:03.215 --rc geninfo_unexecuted_blocks=1 00:19:03.215 00:19:03.215 ' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:03.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.215 --rc genhtml_branch_coverage=1 00:19:03.215 --rc genhtml_function_coverage=1 00:19:03.215 --rc genhtml_legend=1 00:19:03.215 --rc geninfo_all_blocks=1 00:19:03.215 --rc geninfo_unexecuted_blocks=1 00:19:03.215 00:19:03.215 ' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:03.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.215 --rc genhtml_branch_coverage=1 00:19:03.215 --rc genhtml_function_coverage=1 00:19:03.215 --rc genhtml_legend=1 00:19:03.215 --rc geninfo_all_blocks=1 00:19:03.215 --rc geninfo_unexecuted_blocks=1 00:19:03.215 00:19:03.215 ' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:03.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:03.215 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.216 11:19:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.355 
11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:11.355 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:11.356 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:11.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:11.356 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:11.356 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.356 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:19:11.357 00:19:11.357 --- 10.0.0.2 ping statistics --- 00:19:11.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.357 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:19:11.357 00:19:11.357 --- 10.0.0.1 ping statistics --- 00:19:11.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.357 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2740176 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2740176 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2740176 ']' 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.357 11:20:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.357 [2024-11-20 11:20:03.549794] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:19:11.357 [2024-11-20 11:20:03.549867] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:11.357 [2024-11-20 11:20:03.655036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.357 [2024-11-20 11:20:03.715124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.357 [2024-11-20 11:20:03.715180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.357 [2024-11-20 11:20:03.715190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.357 [2024-11-20 11:20:03.715197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.357 [2024-11-20 11:20:03.715203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
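The waitforlisten step above blocks until the freshly started nvmf_tgt (launched with --no-huge -s 1024 -m 0x78 inside the cvl_0_0_ns_spdk namespace) accepts connections on the JSON-RPC socket /var/tmp/spdk.sock. A minimal Python sketch of that wait loop follows, assuming a simple poll-until-connect policy; the function name, timeout, and 0.1 s retry interval are illustrative assumptions, not SPDK's actual shell helper.

import os
import socket
import time

def wait_for_rpc_socket(path="/var/tmp/spdk.sock", pid=None, timeout=100.0):
    """Poll until a UNIX-domain socket accepts connections, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if pid is not None:
            try:
                os.kill(pid, 0)  # signal 0: existence check only, sends nothing
            except ProcessLookupError:
                raise RuntimeError("target pid %d exited before listening" % pid)
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return  # target is up and accepting RPC connections
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.1)  # socket not created or not listening yet; retry
    raise TimeoutError("no RPC listener on %s within %.0fs" % (path, timeout))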
00:19:11.357 [2024-11-20 11:20:03.717056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:11.357 [2024-11-20 11:20:03.717381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.357 [2024-11-20 11:20:03.717218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:11.357 [2024-11-20 11:20:03.717380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 [2024-11-20 11:20:04.424972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 Malloc0 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.929 [2024-11-20 11:20:04.478733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:11.929 { 00:19:11.929 "params": { 00:19:11.929 "name": "Nvme$subsystem", 00:19:11.929 "trtype": "$TEST_TRANSPORT", 00:19:11.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.929 "adrfam": "ipv4", 00:19:11.929 "trsvcid": "$NVMF_PORT", 00:19:11.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.929 "hdgst": ${hdgst:-false}, 00:19:11.929 "ddgst": ${ddgst:-false} 00:19:11.929 }, 00:19:11.929 "method": "bdev_nvme_attach_controller" 00:19:11.929 } 00:19:11.929 EOF 00:19:11.929 )") 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:11.929 11:20:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:11.929 "params": { 00:19:11.929 "name": "Nvme1", 00:19:11.929 "trtype": "tcp", 00:19:11.929 "traddr": "10.0.0.2", 00:19:11.929 "adrfam": "ipv4", 00:19:11.929 "trsvcid": "4420", 00:19:11.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.929 "hdgst": false, 00:19:11.929 "ddgst": false 00:19:11.929 }, 00:19:11.929 "method": "bdev_nvme_attach_controller" 00:19:11.929 }' 00:19:11.929 [2024-11-20 11:20:04.536418] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
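Above, gen_nvmf_target_json expands a heredoc per subsystem, substituting $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT, then pipes the fragments through jq and printf to feed bdevio's --json argument. A rough Python equivalent of that expansion for the single Nvme1 controller, using only the values visible in the expanded output above (any wrapper common.sh may add around these objects is omitted; this reproduces just the printed fragment):

import json

def gen_nvmf_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2", trsvcid="4420"):
    # One bdev_nvme_attach_controller entry, mirroring the heredoc template above.
    return json.dumps({
        "params": {
            "name": "Nvme%d" % subsystem,
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": "nqn.2016-06.io.spdk:cnode%d" % subsystem,
            "hostnqn": "nqn.2016-06.io.spdk:host%d" % subsystem,
            "hdgst": False,  # ${hdgst:-false} in the shell template
            "ddgst": False,  # ${ddgst:-false}
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=1)

print(gen_nvmf_target_json())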
00:19:11.929 [2024-11-20 11:20:04.536495] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2740427 ] 00:19:11.929 [2024-11-20 11:20:04.635857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:12.190 [2024-11-20 11:20:04.695965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.190 [2024-11-20 11:20:04.696130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.190 [2024-11-20 11:20:04.696130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.451 I/O targets: 00:19:12.451 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:12.451 00:19:12.451 00:19:12.451 CUnit - A unit testing framework for C - Version 2.1-3 00:19:12.451 http://cunit.sourceforge.net/ 00:19:12.451 00:19:12.451 00:19:12.451 Suite: bdevio tests on: Nvme1n1 00:19:12.451 Test: blockdev write read block ...passed 00:19:12.451 Test: blockdev write zeroes read block ...passed 00:19:12.451 Test: blockdev write zeroes read no split ...passed 00:19:12.451 Test: blockdev write zeroes read split ...passed 00:19:12.451 Test: blockdev write zeroes read split partial ...passed 00:19:12.451 Test: blockdev reset ...[2024-11-20 11:20:05.182378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:12.451 [2024-11-20 11:20:05.182479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e2800 (9): Bad file descriptor 00:19:12.712 [2024-11-20 11:20:05.198670] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:12.712 passed 00:19:12.712 Test: blockdev write read 8 blocks ...passed 00:19:12.712 Test: blockdev write read size > 128k ...passed 00:19:12.712 Test: blockdev write read invalid size ...passed 00:19:12.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:12.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:12.712 Test: blockdev write read max offset ...passed 00:19:12.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:12.712 Test: blockdev writev readv 8 blocks ...passed 00:19:12.712 Test: blockdev writev readv 30 x 1block ...passed 00:19:12.712 Test: blockdev writev readv block ...passed 00:19:12.712 Test: blockdev writev readv size > 128k ...passed 00:19:12.712 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:12.712 Test: blockdev comparev and writev ...[2024-11-20 11:20:05.423013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.423063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.423079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.423088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.423681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.423696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.423710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.423718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.424267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.424279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.424293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.424301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.424857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.424868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:12.712 [2024-11-20 11:20:05.424882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.712 [2024-11-20 11:20:05.424890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:12.973 passed 00:19:12.973 Test: blockdev nvme passthru rw ...passed 00:19:12.973 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:20:05.511022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.973 [2024-11-20 11:20:05.511040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:12.973 [2024-11-20 11:20:05.511431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.973 [2024-11-20 11:20:05.511444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:12.973 [2024-11-20 11:20:05.511831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.973 [2024-11-20 11:20:05.511842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:12.973 [2024-11-20 11:20:05.512224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.973 [2024-11-20 11:20:05.512237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:12.973 passed 00:19:12.973 Test: blockdev nvme admin passthru ...passed 00:19:12.973 Test: blockdev copy ...passed 00:19:12.973 00:19:12.973 Run Summary: Type Total Ran Passed Failed Inactive 00:19:12.973 suites 1 1 n/a 0 0 00:19:12.973 tests 23 23 23 0 0 00:19:12.973 asserts 152 152 152 0 n/a 00:19:12.973 00:19:12.973 Elapsed time = 1.055 seconds 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.233 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:13.234 rmmod nvme_tcp 00:19:13.234 rmmod nvme_fabrics 00:19:13.234 rmmod nvme_keyring 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2740176 ']' 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2740176 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2740176 ']' 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2740176 00:19:13.234 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:13.495 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.495 11:20:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2740176 00:19:13.495 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:13.495 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:13.495 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2740176' 00:19:13.495 killing process with pid 2740176 00:19:13.495 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2740176 00:19:13.495 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2740176 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.755 11:20:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:16.299 00:19:16.299 real 0m12.768s 00:19:16.299 user 0m15.000s 00:19:16.299 sys 0m6.891s 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.299 ************************************ 00:19:16.299 END TEST nvmf_bdevio_no_huge 00:19:16.299 ************************************ 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.299 ************************************ 00:19:16.299 START TEST nvmf_tls 00:19:16.299 ************************************ 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.299 * Looking for test storage... 00:19:16.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.299 --rc genhtml_branch_coverage=1 00:19:16.299 --rc genhtml_function_coverage=1 00:19:16.299 --rc genhtml_legend=1 00:19:16.299 --rc geninfo_all_blocks=1 00:19:16.299 --rc geninfo_unexecuted_blocks=1 00:19:16.299 00:19:16.299 ' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.299 --rc genhtml_branch_coverage=1 00:19:16.299 --rc genhtml_function_coverage=1 00:19:16.299 --rc genhtml_legend=1 00:19:16.299 --rc geninfo_all_blocks=1 00:19:16.299 --rc geninfo_unexecuted_blocks=1 00:19:16.299 00:19:16.299 ' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.299 --rc genhtml_branch_coverage=1 00:19:16.299 --rc genhtml_function_coverage=1 00:19:16.299 --rc genhtml_legend=1 00:19:16.299 --rc geninfo_all_blocks=1 00:19:16.299 --rc geninfo_unexecuted_blocks=1 00:19:16.299 00:19:16.299 ' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.299 --rc genhtml_branch_coverage=1 00:19:16.299 --rc genhtml_function_coverage=1 00:19:16.299 --rc genhtml_legend=1 00:19:16.299 --rc geninfo_all_blocks=1 00:19:16.299 --rc geninfo_unexecuted_blocks=1 00:19:16.299 00:19:16.299 ' 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
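The records above step through the version guard in scripts/common.sh: 'lt 1.15 2' asks whether the detected lcov (1.15) predates version 2, which decides which coverage flags get exported a few records later. A loose bash reconstruction of that comparison, simplified from the @333-@368 trace (the real helper also validates each field with decimal; treat names and details here as illustrative, not the verbatim SPDK source):

    # Split both version strings on '.', '-' and ':' and compare field by
    # field, mirroring the loop the xtrace walks through above.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]   # all fields equal: only <= / >= style operators succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, the branch taken above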
00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.299 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.300 11:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
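The ID tables being assembled here (the Mellanox list continues just below) drive NIC discovery: every PCI function whose vendor:device pair appears in e810, x722, or mlx is kept in pci_devs, and each survivor is then mapped to the netdev the kernel created for it via sysfs. A condensed sketch of that mapping, mirroring the @411/@427/@428/@429 records further down (the two addresses are this run's E810 ports):

    # Resolve each matched PCI function to its kernel interface name(s).
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done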
00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:24.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:24.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.444 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:24.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:24.445 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.445 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:19:24.445 00:19:24.445 --- 10.0.0.2 ping statistics --- 00:19:24.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.445 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:19:24.445 00:19:24.445 --- 10.0.0.1 ping statistics --- 00:19:24.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.445 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2744984 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2744984 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2744984 ']' 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.445 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.445 [2024-11-20 11:20:16.395011] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:19:24.446 [2024-11-20 11:20:16.395078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.446 [2024-11-20 11:20:16.495413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.446 [2024-11-20 11:20:16.546402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.446 [2024-11-20 11:20:16.546452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.446 [2024-11-20 11:20:16.546461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.446 [2024-11-20 11:20:16.546468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.446 [2024-11-20 11:20:16.546479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.446 [2024-11-20 11:20:16.547258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:24.707 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:24.707 true 00:19:24.968 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.968 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:24.968 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:24.968 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:24.968 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:25.230 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.230 11:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:25.492 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:25.492 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:25.492 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:25.492 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.492 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:25.753 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:25.753 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:25.753 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.753 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:26.014 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:26.014 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:26.014 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:26.275 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:26.275 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.275 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:26.275 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:26.275 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:26.535 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.535 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.MATpjaNoxZ 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.uxAoTssTxJ 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MATpjaNoxZ 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.uxAoTssTxJ 00:19:26.796 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:27.057 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:27.318 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.MATpjaNoxZ 00:19:27.318 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MATpjaNoxZ 00:19:27.318 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.318 [2024-11-20 11:20:20.048745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.577 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.577 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.837 [2024-11-20 11:20:20.385571] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.837 [2024-11-20 11:20:20.385782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.837 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.837 malloc0 00:19:27.837 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:28.098 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MATpjaNoxZ 00:19:28.359 11:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.359 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MATpjaNoxZ 00:19:40.583 Initializing NVMe Controllers 00:19:40.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:40.583 Initialization complete. Launching workers. 00:19:40.583 ======================================================== 00:19:40.583 Latency(us) 00:19:40.583 Device Information : IOPS MiB/s Average min max 00:19:40.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18472.46 72.16 3464.85 1158.96 4130.91 00:19:40.583 ======================================================== 00:19:40.583 Total : 18472.46 72.16 3464.85 1158.96 4130.91 00:19:40.583 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MATpjaNoxZ 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MATpjaNoxZ 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2747839 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2747839 /var/tmp/bdevperf.sock 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2747839 ']' 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:40.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.583 11:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.583 [2024-11-20 11:20:31.271346] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:19:40.583 [2024-11-20 11:20:31.271404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747839 ] 00:19:40.583 [2024-11-20 11:20:31.361227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.583 [2024-11-20 11:20:31.396658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.583 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.583 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.583 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MATpjaNoxZ 00:19:40.583 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.583 [2024-11-20 11:20:32.360212] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.583 TLSTESTn1 00:19:40.583 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.583 Running I/O for 10 seconds... 
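The run just launched is the positive TLS case: ordinary verify I/O through an NVMe/TCP controller whose connection was negotiated with the retained PSK. Condensed from the trace above into its three moving parts (socket path, key file, and NQNs are the ones this run used; a sketch for orientation, not a standalone script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # 1) register the PSK file with the initiator-side keyring
    $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.MATpjaNoxZ
    # 2) attach over TCP; --psk makes bdev_nvme bring the connection up via TLS
    $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # 3) drive the verify workload for the timed window over the RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s $sock perform_tests

The negative cases that follow reuse this same sequence but swap in a mismatched key or host NQN, expecting bdev_nvme_attach_controller to fail.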
00:19:42.227 5464.00 IOPS, 21.34 MiB/s
[2024-11-20T10:20:35.626Z] 4788.00 IOPS, 18.70 MiB/s
[2024-11-20T10:20:36.605Z] 5026.67 IOPS, 19.64 MiB/s
[2024-11-20T10:20:37.988Z] 5371.25 IOPS, 20.98 MiB/s
[2024-11-20T10:20:38.559Z] 5488.80 IOPS, 21.44 MiB/s
[2024-11-20T10:20:39.942Z] 5445.00 IOPS, 21.27 MiB/s
[2024-11-20T10:20:40.884Z] 5523.00 IOPS, 21.57 MiB/s
[2024-11-20T10:20:41.825Z] 5560.12 IOPS, 21.72 MiB/s
[2024-11-20T10:20:42.765Z] 5555.33 IOPS, 21.70 MiB/s
[2024-11-20T10:20:42.765Z] 5462.40 IOPS, 21.34 MiB/s
00:19:50.023 Latency(us)
[2024-11-20T10:20:42.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.023 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:50.023 Verification LBA range: start 0x0 length 0x2000
00:19:50.023 TLSTESTn1 : 10.01 5468.14 21.36 0.00 0.00 23373.76 4778.67 72963.41
00:19:50.023 [2024-11-20T10:20:42.765Z] ===================================================================================================================
00:19:50.023 [2024-11-20T10:20:42.765Z] Total : 5468.14 21.36 0.00 0.00 23373.76 4778.67 72963.41
00:19:50.023 {
00:19:50.023   "results": [
00:19:50.023     {
00:19:50.023       "job": "TLSTESTn1",
00:19:50.023       "core_mask": "0x4",
00:19:50.023       "workload": "verify",
00:19:50.023       "status": "finished",
00:19:50.023       "verify_range": {
00:19:50.023         "start": 0,
00:19:50.023         "length": 8192
00:19:50.023       },
00:19:50.023       "queue_depth": 128,
00:19:50.023       "io_size": 4096,
00:19:50.023       "runtime": 10.01273,
00:19:50.023       "iops": 5468.139058977921,
00:19:50.023       "mibps": 21.359918199132505,
00:19:50.023       "io_failed": 0,
00:19:50.023       "io_timeout": 0,
00:19:50.023       "avg_latency_us": 23373.762791790712,
00:19:50.023       "min_latency_us": 4778.666666666667,
00:19:50.023       "max_latency_us": 72963.41333333333
00:19:50.023     }
00:19:50.023   ],
00:19:50.023   "core_count": 1
00:19:50.023 }
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2747839
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2747839 ']'
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2747839
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2747839
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747839'
00:19:50.023 killing process with pid 2747839
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2747839
00:19:50.023 Received shutdown signal, test time was about 10.000000 seconds
00:19:50.023
00:19:50.023 Latency(us)
[2024-11-20T10:20:42.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T10:20:42.765Z] ===================================================================================================================
00:19:50.023 [2024-11-20T10:20:42.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:50.023 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2747839
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uxAoTssTxJ
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uxAoTssTxJ
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uxAoTssTxJ
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uxAoTssTxJ
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2750186
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2750186 /var/tmp/bdevperf.sock
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2750186 ']'
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.283 11:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.283 [2024-11-20 11:20:42.820360] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:19:50.283 [2024-11-20 11:20:42.820416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750186 ] 00:19:50.283 [2024-11-20 11:20:42.903833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.283 [2024-11-20 11:20:42.932430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uxAoTssTxJ 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.224 [2024-11-20 11:20:43.926999] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.224 [2024-11-20 11:20:43.934028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.224 [2024-11-20 11:20:43.935121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7bbb0 (107): Transport endpoint is not connected 00:19:51.224 [2024-11-20 11:20:43.936117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7bbb0 (9): Bad file descriptor 00:19:51.224 [2024-11-20 11:20:43.937118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:51.224 [2024-11-20 11:20:43.937125] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.224 [2024-11-20 11:20:43.937130] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:51.224 [2024-11-20 11:20:43.937139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:51.224 request: 00:19:51.224 { 00:19:51.224 "name": "TLSTEST", 00:19:51.224 "trtype": "tcp", 00:19:51.224 "traddr": "10.0.0.2", 00:19:51.224 "adrfam": "ipv4", 00:19:51.224 "trsvcid": "4420", 00:19:51.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.224 "prchk_reftag": false, 00:19:51.224 "prchk_guard": false, 00:19:51.224 "hdgst": false, 00:19:51.224 "ddgst": false, 00:19:51.224 "psk": "key0", 00:19:51.224 "allow_unrecognized_csi": false, 00:19:51.224 "method": "bdev_nvme_attach_controller", 00:19:51.224 "req_id": 1 00:19:51.224 } 00:19:51.224 Got JSON-RPC error response 00:19:51.224 response: 00:19:51.224 { 00:19:51.224 "code": -5, 00:19:51.224 "message": "Input/output error" 00:19:51.224 } 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2750186 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750186 ']' 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750186 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.224 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750186 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750186' 00:19:51.485 killing process with pid 2750186 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750186 00:19:51.485 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.485 00:19:51.485 Latency(us) 00:19:51.485 [2024-11-20T10:20:44.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.485 [2024-11-20T10:20:44.227Z] =================================================================================================================== 00:19:51.485 [2024-11-20T10:20:44.227Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750186 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MATpjaNoxZ 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.MATpjaNoxZ 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MATpjaNoxZ 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MATpjaNoxZ 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2750382 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2750382 /var/tmp/bdevperf.sock 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2750382 ']' 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.485 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.485 [2024-11-20 11:20:44.166305] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:19:51.485 [2024-11-20 11:20:44.166366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750382 ] 00:19:51.746 [2024-11-20 11:20:44.248042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.746 [2024-11-20 11:20:44.277865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.318 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.318 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.318 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MATpjaNoxZ 00:19:52.578 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:52.578 [2024-11-20 11:20:45.264220] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:52.579 [2024-11-20 11:20:45.268585] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:52.579 [2024-11-20 11:20:45.268605] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:52.579 [2024-11-20 11:20:45.268624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:52.579 [2024-11-20 11:20:45.269313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe40bb0 (107): Transport endpoint is not connected 00:19:52.579 [2024-11-20 11:20:45.270307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe40bb0 (9): Bad file descriptor 00:19:52.579 [2024-11-20 11:20:45.271309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:52.579 [2024-11-20 11:20:45.271316] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:52.579 [2024-11-20 11:20:45.271322] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:52.579 [2024-11-20 11:20:45.271330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
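The PSK lookup failure above is the intended outcome of this negative test (target/tls.sh@150): the key was registered in bdevperf's keyring, but the attach pairs hostnqn host2 with subsystem cnode1, a combination for which the target holds no PSK. The identity string the target searches is visible in the error itself; as an illustrative sketch (the leading "NVMe0R01" token encodes protocol, PSK type and hash, and the exact construction lives in SPDK's tcp.c, so treat the decomposition as an assumption):

    # Sketch: the TLS PSK identity the target tried to resolve, built from
    # the host and subsystem NQNs exactly as shown in the error above.
    hostnqn="nqn.2016-06.io.spdk:host2"
    subnqn="nqn.2016-06.io.spdk:cnode1"
    echo "NVMe0R01 ${hostnqn} ${subnqn}"

With no PSK for that identity the TLS session is rejected, the socket reports errno 107 (Transport endpoint is not connected), and controller init fails; the JSON-RPC dump that follows is rpc.py reporting the failed bdev_nvme_attach_controller call, where code -5 corresponds to EIO.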
00:19:52.579 request: 00:19:52.579 { 00:19:52.579 "name": "TLSTEST", 00:19:52.579 "trtype": "tcp", 00:19:52.579 "traddr": "10.0.0.2", 00:19:52.579 "adrfam": "ipv4", 00:19:52.579 "trsvcid": "4420", 00:19:52.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.579 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:52.579 "prchk_reftag": false, 00:19:52.579 "prchk_guard": false, 00:19:52.579 "hdgst": false, 00:19:52.579 "ddgst": false, 00:19:52.579 "psk": "key0", 00:19:52.579 "allow_unrecognized_csi": false, 00:19:52.579 "method": "bdev_nvme_attach_controller", 00:19:52.579 "req_id": 1 00:19:52.579 } 00:19:52.579 Got JSON-RPC error response 00:19:52.579 response: 00:19:52.579 { 00:19:52.579 "code": -5, 00:19:52.579 "message": "Input/output error" 00:19:52.579 } 00:19:52.579 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2750382 00:19:52.579 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750382 ']' 00:19:52.579 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750382 00:19:52.579 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.579 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.579 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750382 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750382' 00:19:52.838 killing process with pid 2750382 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750382 00:19:52.838 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.838 00:19:52.838 Latency(us) 00:19:52.838 [2024-11-20T10:20:45.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.838 [2024-11-20T10:20:45.580Z] =================================================================================================================== 00:19:52.838 [2024-11-20T10:20:45.580Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750382 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MATpjaNoxZ 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.MATpjaNoxZ 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MATpjaNoxZ 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MATpjaNoxZ 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2750562 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2750562 /var/tmp/bdevperf.sock 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2750562 ']' 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.838 11:20:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.838 [2024-11-20 11:20:45.494791] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:19:52.838 [2024-11-20 11:20:45.494850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750562 ] 00:19:53.098 [2024-11-20 11:20:45.578217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.098 [2024-11-20 11:20:45.606979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.668 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.668 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.668 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MATpjaNoxZ 00:19:53.929 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.929 [2024-11-20 11:20:46.629824] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.929 [2024-11-20 11:20:46.640542] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:53.929 [2024-11-20 11:20:46.640561] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:53.929 [2024-11-20 11:20:46.640579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:53.929 [2024-11-20 11:20:46.640885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb52bb0 (107): Transport endpoint is not connected 00:19:53.929 [2024-11-20 11:20:46.641881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb52bb0 (9): Bad file descriptor 00:19:53.929 [2024-11-20 11:20:46.642883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:53.929 [2024-11-20 11:20:46.642890] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:53.929 [2024-11-20 11:20:46.642895] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:53.929 [2024-11-20 11:20:46.642903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
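Same failure mode, opposite permutation (target/tls.sh@153): this attach pairs host1 with cnode2, and again the target holds no PSK for the identity NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2, so the call fails with the identical -5 response printed next. The two RPCs the test wraps can be replayed by hand against a running bdevperf; a hedged sketch, with the rpc.py path shortened and the key file a placeholder:

    # Register a PSK file in bdevperf's keyring, then attempt a TLS attach
    # for an NQN pairing the target does not know; the attach should fail.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.key
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 \
        || echo "attach failed as expected"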
00:19:53.929 request: 00:19:53.929 { 00:19:53.929 "name": "TLSTEST", 00:19:53.929 "trtype": "tcp", 00:19:53.929 "traddr": "10.0.0.2", 00:19:53.929 "adrfam": "ipv4", 00:19:53.929 "trsvcid": "4420", 00:19:53.929 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:53.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.929 "prchk_reftag": false, 00:19:53.929 "prchk_guard": false, 00:19:53.929 "hdgst": false, 00:19:53.929 "ddgst": false, 00:19:53.929 "psk": "key0", 00:19:53.929 "allow_unrecognized_csi": false, 00:19:53.929 "method": "bdev_nvme_attach_controller", 00:19:53.929 "req_id": 1 00:19:53.929 } 00:19:53.929 Got JSON-RPC error response 00:19:53.929 response: 00:19:53.929 { 00:19:53.929 "code": -5, 00:19:53.929 "message": "Input/output error" 00:19:53.929 } 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2750562 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750562 ']' 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750562 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750562 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750562' 00:19:54.190 killing process with pid 2750562 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750562 00:19:54.190 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.190 00:19:54.190 Latency(us) 00:19:54.190 [2024-11-20T10:20:46.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.190 [2024-11-20T10:20:46.932Z] =================================================================================================================== 00:19:54.190 [2024-11-20T10:20:46.932Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750562 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.190 
11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:54.190 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2750890 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2750890 /var/tmp/bdevperf.sock 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2750890 ']' 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.191 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.191 [2024-11-20 11:20:46.885415] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:19:54.191 [2024-11-20 11:20:46.885470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750890 ] 00:19:54.451 [2024-11-20 11:20:46.967898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.451 [2024-11-20 11:20:46.998061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.023 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.023 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:55.023 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:55.283 [2024-11-20 11:20:47.840228] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:55.283 [2024-11-20 11:20:47.840250] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:55.283 request: 00:19:55.283 { 00:19:55.283 "name": "key0", 00:19:55.283 "path": "", 00:19:55.283 "method": "keyring_file_add_key", 00:19:55.283 "req_id": 1 00:19:55.283 } 00:19:55.283 Got JSON-RPC error response 00:19:55.283 response: 00:19:55.283 { 00:19:55.283 "code": -1, 00:19:55.283 "message": "Operation not permitted" 00:19:55.283 } 00:19:55.283 11:20:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.542 [2024-11-20 11:20:48.024767] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.542 [2024-11-20 11:20:48.024789] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:55.542 request: 00:19:55.542 { 00:19:55.542 "name": "TLSTEST", 00:19:55.542 "trtype": "tcp", 00:19:55.542 "traddr": "10.0.0.2", 00:19:55.542 "adrfam": "ipv4", 00:19:55.542 "trsvcid": "4420", 00:19:55.542 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.542 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.542 "prchk_reftag": false, 00:19:55.542 "prchk_guard": false, 00:19:55.542 "hdgst": false, 00:19:55.542 "ddgst": false, 00:19:55.542 "psk": "key0", 00:19:55.542 "allow_unrecognized_csi": false, 00:19:55.542 "method": "bdev_nvme_attach_controller", 00:19:55.542 "req_id": 1 00:19:55.542 } 00:19:55.542 Got JSON-RPC error response 00:19:55.542 response: 00:19:55.542 { 00:19:55.542 "code": -126, 00:19:55.542 "message": "Required key not available" 00:19:55.542 } 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2750890 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750890 ']' 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750890 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2750890 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750890' 00:19:55.542 killing process with pid 2750890 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750890 00:19:55.542 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.542 00:19:55.542 Latency(us) 00:19:55.542 [2024-11-20T10:20:48.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.542 [2024-11-20T10:20:48.284Z] =================================================================================================================== 00:19:55.542 [2024-11-20T10:20:48.284Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750890 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:55.542 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2744984 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2744984 ']' 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2744984 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2744984 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2744984' 00:19:55.543 killing process with pid 2744984 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2744984 00:19:55.543 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2744984 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:55.801 11:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.tyI0Y1O7iz 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:55.801 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.tyI0Y1O7iz 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2751249 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2751249 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2751249 ']' 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.802 11:20:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.802 [2024-11-20 11:20:48.490841] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:19:55.802 [2024-11-20 11:20:48.490893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.061 [2024-11-20 11:20:48.581805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.061 [2024-11-20 11:20:48.610602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.061 [2024-11-20 11:20:48.610633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:56.061 [2024-11-20 11:20:48.610640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.061 [2024-11-20 11:20:48.610645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.061 [2024-11-20 11:20:48.610649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.061 [2024-11-20 11:20:48.611146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.tyI0Y1O7iz 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tyI0Y1O7iz 00:19:56.635 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.897 [2024-11-20 11:20:49.494581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.897 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.158 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.158 [2024-11-20 11:20:49.831412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.158 [2024-11-20 11:20:49.831615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.158 11:20:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.418 malloc0 00:19:57.418 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.677 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:19:57.677 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyI0Y1O7iz 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tyI0Y1O7iz 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2751616 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2751616 /var/tmp/bdevperf.sock 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2751616 ']' 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.938 11:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.938 [2024-11-20 11:20:50.576407] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:19:57.938 [2024-11-20 11:20:50.576460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751616 ] 00:19:57.938 [2024-11-20 11:20:50.660124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.198 [2024-11-20 11:20:50.689206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.769 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.769 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:58.769 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:19:59.030 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:59.030 [2024-11-20 11:20:51.675848] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.030 TLSTESTn1 00:19:59.291 11:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:59.291 Running I/O for 10 seconds... 00:20:01.173 4826.00 IOPS, 18.85 MiB/s [2024-11-20T10:20:55.299Z] 4862.00 IOPS, 18.99 MiB/s [2024-11-20T10:20:55.869Z] 4998.33 IOPS, 19.52 MiB/s [2024-11-20T10:20:57.252Z] 5166.75 IOPS, 20.18 MiB/s [2024-11-20T10:20:58.196Z] 5395.00 IOPS, 21.07 MiB/s [2024-11-20T10:20:59.137Z] 5422.00 IOPS, 21.18 MiB/s [2024-11-20T10:21:00.078Z] 5435.29 IOPS, 21.23 MiB/s [2024-11-20T10:21:01.021Z] 5524.50 IOPS, 21.58 MiB/s [2024-11-20T10:21:01.962Z] 5566.56 IOPS, 21.74 MiB/s [2024-11-20T10:21:01.962Z] 5609.60 IOPS, 21.91 MiB/s 00:20:09.220 Latency(us) 00:20:09.220 [2024-11-20T10:21:01.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.220 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:09.220 Verification LBA range: start 0x0 length 0x2000 00:20:09.220 TLSTESTn1 : 10.02 5610.58 21.92 0.00 0.00 22778.52 5324.80 24794.45 00:20:09.220 [2024-11-20T10:21:01.962Z] =================================================================================================================== 00:20:09.220 [2024-11-20T10:21:01.962Z] Total : 5610.58 21.92 0.00 0.00 22778.52 5324.80 24794.45 00:20:09.220 { 00:20:09.220 "results": [ 00:20:09.220 { 00:20:09.220 "job": "TLSTESTn1", 00:20:09.220 "core_mask": "0x4", 00:20:09.220 "workload": "verify", 00:20:09.220 "status": "finished", 00:20:09.220 "verify_range": { 00:20:09.220 "start": 0, 00:20:09.220 "length": 8192 00:20:09.220 }, 00:20:09.220 "queue_depth": 128, 00:20:09.220 "io_size": 4096, 00:20:09.220 "runtime": 10.020883, 00:20:09.220 "iops": 5610.58341864684, 00:20:09.220 "mibps": 21.91634147908922, 00:20:09.220 "io_failed": 0, 00:20:09.220 "io_timeout": 0, 00:20:09.220 "avg_latency_us": 22778.52068418026, 00:20:09.220 "min_latency_us": 5324.8, 00:20:09.220 "max_latency_us": 24794.453333333335 00:20:09.220 } 00:20:09.220 ], 00:20:09.220 "core_count": 1 
00:20:09.220 } 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2751616 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2751616 ']' 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2751616 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.221 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751616 00:20:09.482 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:09.482 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:09.482 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751616' 00:20:09.482 killing process with pid 2751616 00:20:09.482 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2751616 00:20:09.482 Received shutdown signal, test time was about 10.000000 seconds 00:20:09.482 00:20:09.482 Latency(us) 00:20:09.482 [2024-11-20T10:21:02.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.482 [2024-11-20T10:21:02.224Z] =================================================================================================================== 00:20:09.482 [2024-11-20T10:21:02.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.482 11:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2751616 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.tyI0Y1O7iz 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyI0Y1O7iz 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyI0Y1O7iz 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tyI0Y1O7iz 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.482 11:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tyI0Y1O7iz 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2754062 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2754062 /var/tmp/bdevperf.sock 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2754062 ']' 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.482 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.482 [2024-11-20 11:21:02.157420] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:20:09.482 [2024-11-20 11:21:02.157478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754062 ] 00:20:09.744 [2024-11-20 11:21:02.242193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.744 [2024-11-20 11:21:02.270525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.315 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.315 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.315 11:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:10.576 [2024-11-20 11:21:03.096439] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tyI0Y1O7iz': 0100666 00:20:10.576 [2024-11-20 11:21:03.096465] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:10.576 request: 00:20:10.576 { 00:20:10.576 "name": "key0", 00:20:10.576 "path": "/tmp/tmp.tyI0Y1O7iz", 00:20:10.576 "method": "keyring_file_add_key", 00:20:10.576 "req_id": 1 00:20:10.576 } 00:20:10.576 Got JSON-RPC error response 00:20:10.576 response: 00:20:10.576 { 00:20:10.576 "code": -1, 00:20:10.576 "message": "Operation not permitted" 00:20:10.576 } 00:20:10.576 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.576 [2024-11-20 11:21:03.280974] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:10.576 [2024-11-20 11:21:03.280997] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:10.576 request: 00:20:10.576 { 00:20:10.576 "name": "TLSTEST", 00:20:10.576 "trtype": "tcp", 00:20:10.576 "traddr": "10.0.0.2", 00:20:10.576 "adrfam": "ipv4", 00:20:10.576 "trsvcid": "4420", 00:20:10.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.576 "prchk_reftag": false, 00:20:10.576 "prchk_guard": false, 00:20:10.576 "hdgst": false, 00:20:10.576 "ddgst": false, 00:20:10.576 "psk": "key0", 00:20:10.576 "allow_unrecognized_csi": false, 00:20:10.576 "method": "bdev_nvme_attach_controller", 00:20:10.576 "req_id": 1 00:20:10.576 } 00:20:10.576 Got JSON-RPC error response 00:20:10.576 response: 00:20:10.576 { 00:20:10.576 "code": -126, 00:20:10.576 "message": "Required key not available" 00:20:10.576 } 00:20:10.576 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2754062 00:20:10.576 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2754062 ']' 00:20:10.576 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2754062 00:20:10.576 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.837 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.837 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2754062 00:20:10.837 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:10.837 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:10.837 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2754062' 00:20:10.837 killing process with pid 2754062 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2754062 00:20:10.838 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.838 00:20:10.838 Latency(us) 00:20:10.838 [2024-11-20T10:21:03.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.838 [2024-11-20T10:21:03.580Z] =================================================================================================================== 00:20:10.838 [2024-11-20T10:21:03.580Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2754062 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2751249 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2751249 ']' 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2751249 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751249 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751249' 00:20:10.838 killing process with pid 2751249 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2751249 00:20:10.838 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2751249 00:20:11.098 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2754409 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2754409 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2754409 ']' 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.099 11:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.099 [2024-11-20 11:21:03.700760] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:11.099 [2024-11-20 11:21:03.700815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.099 [2024-11-20 11:21:03.789991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.099 [2024-11-20 11:21:03.825802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.099 [2024-11-20 11:21:03.825842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.099 [2024-11-20 11:21:03.825848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.099 [2024-11-20 11:21:03.825853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.099 [2024-11-20 11:21:03.825858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
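Earlier in the trace (target/tls.sh@171) the key file was deliberately switched to mode 0666, so the target bring-up that follows (target/tls.sh@178, wrapped in NOT) is expected to fail at keyring_file_add_key: SPDK's file-based keyring refuses key files that are group- or world-accessible. A minimal illustration, using the same file and the mode values the errors below report:

    # keyring_file_check_path rejects keys readable by group/other.
    chmod 0666 /tmp/tmp.tyI0Y1O7iz
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz  # rejected: 0100666
    chmod 0600 /tmp/tmp.tyI0Y1O7iz
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz  # accepted

Because the key never enters the keyring, the subsequent nvmf_subsystem_add_host --psk key0 reports "Key 'key0' does not exist" and the RPC surfaces as code -32603 (Internal error).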
00:20:11.099 [2024-11-20 11:21:03.826378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.tyI0Y1O7iz 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.tyI0Y1O7iz 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.tyI0Y1O7iz 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tyI0Y1O7iz 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.038 [2024-11-20 11:21:04.709121] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.038 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:12.299 11:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.299 [2024-11-20 11:21:05.037922] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.299 [2024-11-20 11:21:05.038127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.559 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.559 malloc0 00:20:12.559 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.819 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:12.819 [2024-11-20 
11:21:05.525021] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tyI0Y1O7iz': 0100666 00:20:12.819 [2024-11-20 11:21:05.525044] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:12.819 request: 00:20:12.819 { 00:20:12.819 "name": "key0", 00:20:12.819 "path": "/tmp/tmp.tyI0Y1O7iz", 00:20:12.819 "method": "keyring_file_add_key", 00:20:12.819 "req_id": 1 00:20:12.819 } 00:20:12.819 Got JSON-RPC error response 00:20:12.819 response: 00:20:12.819 { 00:20:12.819 "code": -1, 00:20:12.819 "message": "Operation not permitted" 00:20:12.819 } 00:20:12.819 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:13.079 [2024-11-20 11:21:05.693459] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:13.079 [2024-11-20 11:21:05.693488] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:13.079 request: 00:20:13.079 { 00:20:13.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.079 "host": "nqn.2016-06.io.spdk:host1", 00:20:13.079 "psk": "key0", 00:20:13.079 "method": "nvmf_subsystem_add_host", 00:20:13.079 "req_id": 1 00:20:13.079 } 00:20:13.079 Got JSON-RPC error response 00:20:13.079 response: 00:20:13.079 { 00:20:13.079 "code": -32603, 00:20:13.079 "message": "Internal error" 00:20:13.079 } 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2754409 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2754409 ']' 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2754409 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2754409 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2754409' 00:20:13.079 killing process with pid 2754409 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2754409 00:20:13.079 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2754409 00:20:13.339 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.tyI0Y1O7iz 00:20:13.339 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:13.339 11:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.339 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.339 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.339 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2754789 00:20:13.339 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2754789 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2754789 ']' 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.340 11:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.340 [2024-11-20 11:21:05.952987] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:13.340 [2024-11-20 11:21:05.953042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.340 [2024-11-20 11:21:06.043662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.340 [2024-11-20 11:21:06.072154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.340 [2024-11-20 11:21:06.072185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.340 [2024-11-20 11:21:06.072191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.340 [2024-11-20 11:21:06.072195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.340 [2024-11-20 11:21:06.072199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
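The keyring failure above is SPDK's file-based keyring enforcing key hygiene: keyring_file_check_path rejects any key file whose mode grants group or other access (here 0100666, i.e. 0666), so keyring_file_add_key returns "Operation not permitted" and the dependent nvmf_subsystem_add_host call then fails with "Key 'key0' does not exist". The recovery the test performs is simply chmod 0600 before re-registering the key. A minimal sketch of compliant provisioning, assuming a hypothetical key path /tmp/psk.txt and a target listening on the default /var/tmp/spdk.sock:

    # owner-only permissions satisfy keyring_file_check_path
    chmod 0600 /tmp/psk.txt
    # register the key under the name the subsystem config will refer to
    scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt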
00:20:13.340 [2024-11-20 11:21:06.072636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.tyI0Y1O7iz 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tyI0Y1O7iz 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:14.281 [2024-11-20 11:21:06.923659] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.281 11:21:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:14.541 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:14.541 [2024-11-20 11:21:07.244445] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.541 [2024-11-20 11:21:07.244643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.541 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:14.801 malloc0 00:20:14.801 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:15.062 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:15.062 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2755156 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2755156 /var/tmp/bdevperf.sock 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2755156 ']' 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.323 11:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.323 [2024-11-20 11:21:07.986261] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:15.323 [2024-11-20 11:21:07.986314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755156 ] 00:20:15.584 [2024-11-20 11:21:08.071202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.584 [2024-11-20 11:21:08.100365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.155 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.155 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.155 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:16.415 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:16.415 [2024-11-20 11:21:09.082836] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.675 TLSTESTn1 00:20:16.675 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:16.935 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:16.935 "subsystems": [ 00:20:16.935 { 00:20:16.935 "subsystem": "keyring", 00:20:16.935 "config": [ 00:20:16.935 { 00:20:16.935 "method": "keyring_file_add_key", 00:20:16.935 "params": { 00:20:16.935 "name": "key0", 00:20:16.935 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:16.935 } 00:20:16.935 } 00:20:16.935 ] 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "subsystem": "iobuf", 00:20:16.935 "config": [ 00:20:16.935 { 00:20:16.935 "method": "iobuf_set_options", 00:20:16.935 "params": { 00:20:16.935 "small_pool_count": 8192, 00:20:16.935 "large_pool_count": 1024, 00:20:16.935 "small_bufsize": 8192, 00:20:16.935 "large_bufsize": 135168, 00:20:16.935 "enable_numa": false 00:20:16.935 } 00:20:16.935 } 00:20:16.935 ] 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "subsystem": "sock", 00:20:16.935 "config": [ 00:20:16.935 { 00:20:16.935 "method": "sock_set_default_impl", 00:20:16.935 "params": { 00:20:16.935 "impl_name": "posix" 
00:20:16.935 } 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "method": "sock_impl_set_options", 00:20:16.935 "params": { 00:20:16.935 "impl_name": "ssl", 00:20:16.935 "recv_buf_size": 4096, 00:20:16.935 "send_buf_size": 4096, 00:20:16.935 "enable_recv_pipe": true, 00:20:16.935 "enable_quickack": false, 00:20:16.935 "enable_placement_id": 0, 00:20:16.935 "enable_zerocopy_send_server": true, 00:20:16.935 "enable_zerocopy_send_client": false, 00:20:16.935 "zerocopy_threshold": 0, 00:20:16.935 "tls_version": 0, 00:20:16.935 "enable_ktls": false 00:20:16.935 } 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "method": "sock_impl_set_options", 00:20:16.935 "params": { 00:20:16.935 "impl_name": "posix", 00:20:16.935 "recv_buf_size": 2097152, 00:20:16.935 "send_buf_size": 2097152, 00:20:16.935 "enable_recv_pipe": true, 00:20:16.935 "enable_quickack": false, 00:20:16.935 "enable_placement_id": 0, 00:20:16.935 "enable_zerocopy_send_server": true, 00:20:16.935 "enable_zerocopy_send_client": false, 00:20:16.935 "zerocopy_threshold": 0, 00:20:16.935 "tls_version": 0, 00:20:16.935 "enable_ktls": false 00:20:16.935 } 00:20:16.935 } 00:20:16.935 ] 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "subsystem": "vmd", 00:20:16.935 "config": [] 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "subsystem": "accel", 00:20:16.935 "config": [ 00:20:16.935 { 00:20:16.935 "method": "accel_set_options", 00:20:16.935 "params": { 00:20:16.935 "small_cache_size": 128, 00:20:16.935 "large_cache_size": 16, 00:20:16.935 "task_count": 2048, 00:20:16.935 "sequence_count": 2048, 00:20:16.935 "buf_count": 2048 00:20:16.935 } 00:20:16.935 } 00:20:16.935 ] 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "subsystem": "bdev", 00:20:16.935 "config": [ 00:20:16.935 { 00:20:16.935 "method": "bdev_set_options", 00:20:16.935 "params": { 00:20:16.935 "bdev_io_pool_size": 65535, 00:20:16.935 "bdev_io_cache_size": 256, 00:20:16.935 "bdev_auto_examine": true, 00:20:16.935 "iobuf_small_cache_size": 128, 00:20:16.935 "iobuf_large_cache_size": 16 00:20:16.935 } 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "method": "bdev_raid_set_options", 00:20:16.935 "params": { 00:20:16.935 "process_window_size_kb": 1024, 00:20:16.935 "process_max_bandwidth_mb_sec": 0 00:20:16.935 } 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "method": "bdev_iscsi_set_options", 00:20:16.935 "params": { 00:20:16.935 "timeout_sec": 30 00:20:16.935 } 00:20:16.935 }, 00:20:16.935 { 00:20:16.935 "method": "bdev_nvme_set_options", 00:20:16.935 "params": { 00:20:16.935 "action_on_timeout": "none", 00:20:16.935 "timeout_us": 0, 00:20:16.935 "timeout_admin_us": 0, 00:20:16.935 "keep_alive_timeout_ms": 10000, 00:20:16.935 "arbitration_burst": 0, 00:20:16.935 "low_priority_weight": 0, 00:20:16.935 "medium_priority_weight": 0, 00:20:16.935 "high_priority_weight": 0, 00:20:16.935 "nvme_adminq_poll_period_us": 10000, 00:20:16.936 "nvme_ioq_poll_period_us": 0, 00:20:16.936 "io_queue_requests": 0, 00:20:16.936 "delay_cmd_submit": true, 00:20:16.936 "transport_retry_count": 4, 00:20:16.936 "bdev_retry_count": 3, 00:20:16.936 "transport_ack_timeout": 0, 00:20:16.936 "ctrlr_loss_timeout_sec": 0, 00:20:16.936 "reconnect_delay_sec": 0, 00:20:16.936 "fast_io_fail_timeout_sec": 0, 00:20:16.936 "disable_auto_failback": false, 00:20:16.936 "generate_uuids": false, 00:20:16.936 "transport_tos": 0, 00:20:16.936 "nvme_error_stat": false, 00:20:16.936 "rdma_srq_size": 0, 00:20:16.936 "io_path_stat": false, 00:20:16.936 "allow_accel_sequence": false, 00:20:16.936 "rdma_max_cq_size": 0, 00:20:16.936 
"rdma_cm_event_timeout_ms": 0, 00:20:16.936 "dhchap_digests": [ 00:20:16.936 "sha256", 00:20:16.936 "sha384", 00:20:16.936 "sha512" 00:20:16.936 ], 00:20:16.936 "dhchap_dhgroups": [ 00:20:16.936 "null", 00:20:16.936 "ffdhe2048", 00:20:16.936 "ffdhe3072", 00:20:16.936 "ffdhe4096", 00:20:16.936 "ffdhe6144", 00:20:16.936 "ffdhe8192" 00:20:16.936 ] 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "bdev_nvme_set_hotplug", 00:20:16.936 "params": { 00:20:16.936 "period_us": 100000, 00:20:16.936 "enable": false 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "bdev_malloc_create", 00:20:16.936 "params": { 00:20:16.936 "name": "malloc0", 00:20:16.936 "num_blocks": 8192, 00:20:16.936 "block_size": 4096, 00:20:16.936 "physical_block_size": 4096, 00:20:16.936 "uuid": "ddc0a526-6f2d-4ed3-b67c-37dc1daa49d6", 00:20:16.936 "optimal_io_boundary": 0, 00:20:16.936 "md_size": 0, 00:20:16.936 "dif_type": 0, 00:20:16.936 "dif_is_head_of_md": false, 00:20:16.936 "dif_pi_format": 0 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "bdev_wait_for_examine" 00:20:16.936 } 00:20:16.936 ] 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "subsystem": "nbd", 00:20:16.936 "config": [] 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "subsystem": "scheduler", 00:20:16.936 "config": [ 00:20:16.936 { 00:20:16.936 "method": "framework_set_scheduler", 00:20:16.936 "params": { 00:20:16.936 "name": "static" 00:20:16.936 } 00:20:16.936 } 00:20:16.936 ] 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "subsystem": "nvmf", 00:20:16.936 "config": [ 00:20:16.936 { 00:20:16.936 "method": "nvmf_set_config", 00:20:16.936 "params": { 00:20:16.936 "discovery_filter": "match_any", 00:20:16.936 "admin_cmd_passthru": { 00:20:16.936 "identify_ctrlr": false 00:20:16.936 }, 00:20:16.936 "dhchap_digests": [ 00:20:16.936 "sha256", 00:20:16.936 "sha384", 00:20:16.936 "sha512" 00:20:16.936 ], 00:20:16.936 "dhchap_dhgroups": [ 00:20:16.936 "null", 00:20:16.936 "ffdhe2048", 00:20:16.936 "ffdhe3072", 00:20:16.936 "ffdhe4096", 00:20:16.936 "ffdhe6144", 00:20:16.936 "ffdhe8192" 00:20:16.936 ] 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_set_max_subsystems", 00:20:16.936 "params": { 00:20:16.936 "max_subsystems": 1024 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_set_crdt", 00:20:16.936 "params": { 00:20:16.936 "crdt1": 0, 00:20:16.936 "crdt2": 0, 00:20:16.936 "crdt3": 0 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_create_transport", 00:20:16.936 "params": { 00:20:16.936 "trtype": "TCP", 00:20:16.936 "max_queue_depth": 128, 00:20:16.936 "max_io_qpairs_per_ctrlr": 127, 00:20:16.936 "in_capsule_data_size": 4096, 00:20:16.936 "max_io_size": 131072, 00:20:16.936 "io_unit_size": 131072, 00:20:16.936 "max_aq_depth": 128, 00:20:16.936 "num_shared_buffers": 511, 00:20:16.936 "buf_cache_size": 4294967295, 00:20:16.936 "dif_insert_or_strip": false, 00:20:16.936 "zcopy": false, 00:20:16.936 "c2h_success": false, 00:20:16.936 "sock_priority": 0, 00:20:16.936 "abort_timeout_sec": 1, 00:20:16.936 "ack_timeout": 0, 00:20:16.936 "data_wr_pool_size": 0 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_create_subsystem", 00:20:16.936 "params": { 00:20:16.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.936 "allow_any_host": false, 00:20:16.936 "serial_number": "SPDK00000000000001", 00:20:16.936 "model_number": "SPDK bdev Controller", 00:20:16.936 "max_namespaces": 10, 00:20:16.936 "min_cntlid": 1, 00:20:16.936 
"max_cntlid": 65519, 00:20:16.936 "ana_reporting": false 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_subsystem_add_host", 00:20:16.936 "params": { 00:20:16.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.936 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.936 "psk": "key0" 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_subsystem_add_ns", 00:20:16.936 "params": { 00:20:16.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.936 "namespace": { 00:20:16.936 "nsid": 1, 00:20:16.936 "bdev_name": "malloc0", 00:20:16.936 "nguid": "DDC0A5266F2D4ED3B67C37DC1DAA49D6", 00:20:16.936 "uuid": "ddc0a526-6f2d-4ed3-b67c-37dc1daa49d6", 00:20:16.936 "no_auto_visible": false 00:20:16.936 } 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "nvmf_subsystem_add_listener", 00:20:16.936 "params": { 00:20:16.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.936 "listen_address": { 00:20:16.936 "trtype": "TCP", 00:20:16.936 "adrfam": "IPv4", 00:20:16.936 "traddr": "10.0.0.2", 00:20:16.936 "trsvcid": "4420" 00:20:16.936 }, 00:20:16.936 "secure_channel": true 00:20:16.936 } 00:20:16.936 } 00:20:16.936 ] 00:20:16.936 } 00:20:16.936 ] 00:20:16.936 }' 00:20:16.936 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:16.936 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:16.936 "subsystems": [ 00:20:16.936 { 00:20:16.936 "subsystem": "keyring", 00:20:16.936 "config": [ 00:20:16.936 { 00:20:16.936 "method": "keyring_file_add_key", 00:20:16.936 "params": { 00:20:16.936 "name": "key0", 00:20:16.936 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:16.936 } 00:20:16.936 } 00:20:16.936 ] 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "subsystem": "iobuf", 00:20:16.936 "config": [ 00:20:16.936 { 00:20:16.936 "method": "iobuf_set_options", 00:20:16.936 "params": { 00:20:16.936 "small_pool_count": 8192, 00:20:16.936 "large_pool_count": 1024, 00:20:16.936 "small_bufsize": 8192, 00:20:16.936 "large_bufsize": 135168, 00:20:16.936 "enable_numa": false 00:20:16.936 } 00:20:16.936 } 00:20:16.936 ] 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "subsystem": "sock", 00:20:16.936 "config": [ 00:20:16.936 { 00:20:16.936 "method": "sock_set_default_impl", 00:20:16.936 "params": { 00:20:16.936 "impl_name": "posix" 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "sock_impl_set_options", 00:20:16.936 "params": { 00:20:16.936 "impl_name": "ssl", 00:20:16.936 "recv_buf_size": 4096, 00:20:16.936 "send_buf_size": 4096, 00:20:16.936 "enable_recv_pipe": true, 00:20:16.936 "enable_quickack": false, 00:20:16.936 "enable_placement_id": 0, 00:20:16.936 "enable_zerocopy_send_server": true, 00:20:16.936 "enable_zerocopy_send_client": false, 00:20:16.936 "zerocopy_threshold": 0, 00:20:16.936 "tls_version": 0, 00:20:16.936 "enable_ktls": false 00:20:16.936 } 00:20:16.936 }, 00:20:16.936 { 00:20:16.936 "method": "sock_impl_set_options", 00:20:16.937 "params": { 00:20:16.937 "impl_name": "posix", 00:20:16.937 "recv_buf_size": 2097152, 00:20:16.937 "send_buf_size": 2097152, 00:20:16.937 "enable_recv_pipe": true, 00:20:16.937 "enable_quickack": false, 00:20:16.937 "enable_placement_id": 0, 00:20:16.937 "enable_zerocopy_send_server": true, 00:20:16.937 "enable_zerocopy_send_client": false, 00:20:16.937 "zerocopy_threshold": 0, 00:20:16.937 "tls_version": 0, 00:20:16.937 "enable_ktls": false 00:20:16.937 } 00:20:16.937 
} 00:20:16.937 ] 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "subsystem": "vmd", 00:20:16.937 "config": [] 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "subsystem": "accel", 00:20:16.937 "config": [ 00:20:16.937 { 00:20:16.937 "method": "accel_set_options", 00:20:16.937 "params": { 00:20:16.937 "small_cache_size": 128, 00:20:16.937 "large_cache_size": 16, 00:20:16.937 "task_count": 2048, 00:20:16.937 "sequence_count": 2048, 00:20:16.937 "buf_count": 2048 00:20:16.937 } 00:20:16.937 } 00:20:16.937 ] 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "subsystem": "bdev", 00:20:16.937 "config": [ 00:20:16.937 { 00:20:16.937 "method": "bdev_set_options", 00:20:16.937 "params": { 00:20:16.937 "bdev_io_pool_size": 65535, 00:20:16.937 "bdev_io_cache_size": 256, 00:20:16.937 "bdev_auto_examine": true, 00:20:16.937 "iobuf_small_cache_size": 128, 00:20:16.937 "iobuf_large_cache_size": 16 00:20:16.937 } 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "method": "bdev_raid_set_options", 00:20:16.937 "params": { 00:20:16.937 "process_window_size_kb": 1024, 00:20:16.937 "process_max_bandwidth_mb_sec": 0 00:20:16.937 } 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "method": "bdev_iscsi_set_options", 00:20:16.937 "params": { 00:20:16.937 "timeout_sec": 30 00:20:16.937 } 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "method": "bdev_nvme_set_options", 00:20:16.937 "params": { 00:20:16.937 "action_on_timeout": "none", 00:20:16.937 "timeout_us": 0, 00:20:16.937 "timeout_admin_us": 0, 00:20:16.937 "keep_alive_timeout_ms": 10000, 00:20:16.937 "arbitration_burst": 0, 00:20:16.937 "low_priority_weight": 0, 00:20:16.937 "medium_priority_weight": 0, 00:20:16.937 "high_priority_weight": 0, 00:20:16.937 "nvme_adminq_poll_period_us": 10000, 00:20:16.937 "nvme_ioq_poll_period_us": 0, 00:20:16.937 "io_queue_requests": 512, 00:20:16.937 "delay_cmd_submit": true, 00:20:16.937 "transport_retry_count": 4, 00:20:16.937 "bdev_retry_count": 3, 00:20:16.937 "transport_ack_timeout": 0, 00:20:16.937 "ctrlr_loss_timeout_sec": 0, 00:20:16.937 "reconnect_delay_sec": 0, 00:20:16.937 "fast_io_fail_timeout_sec": 0, 00:20:16.937 "disable_auto_failback": false, 00:20:16.937 "generate_uuids": false, 00:20:16.937 "transport_tos": 0, 00:20:16.937 "nvme_error_stat": false, 00:20:16.937 "rdma_srq_size": 0, 00:20:16.937 "io_path_stat": false, 00:20:16.937 "allow_accel_sequence": false, 00:20:16.937 "rdma_max_cq_size": 0, 00:20:16.937 "rdma_cm_event_timeout_ms": 0, 00:20:16.937 "dhchap_digests": [ 00:20:16.937 "sha256", 00:20:16.937 "sha384", 00:20:16.937 "sha512" 00:20:16.937 ], 00:20:16.937 "dhchap_dhgroups": [ 00:20:16.937 "null", 00:20:16.937 "ffdhe2048", 00:20:16.937 "ffdhe3072", 00:20:16.937 "ffdhe4096", 00:20:16.937 "ffdhe6144", 00:20:16.937 "ffdhe8192" 00:20:16.937 ] 00:20:16.937 } 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "method": "bdev_nvme_attach_controller", 00:20:16.937 "params": { 00:20:16.937 "name": "TLSTEST", 00:20:16.937 "trtype": "TCP", 00:20:16.937 "adrfam": "IPv4", 00:20:16.937 "traddr": "10.0.0.2", 00:20:16.937 "trsvcid": "4420", 00:20:16.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.937 "prchk_reftag": false, 00:20:16.937 "prchk_guard": false, 00:20:16.937 "ctrlr_loss_timeout_sec": 0, 00:20:16.937 "reconnect_delay_sec": 0, 00:20:16.937 "fast_io_fail_timeout_sec": 0, 00:20:16.937 "psk": "key0", 00:20:16.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.937 "hdgst": false, 00:20:16.937 "ddgst": false, 00:20:16.937 "multipath": "multipath" 00:20:16.937 } 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "method": 
"bdev_nvme_set_hotplug", 00:20:16.937 "params": { 00:20:16.937 "period_us": 100000, 00:20:16.937 "enable": false 00:20:16.937 } 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "method": "bdev_wait_for_examine" 00:20:16.937 } 00:20:16.937 ] 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "subsystem": "nbd", 00:20:16.937 "config": [] 00:20:16.937 } 00:20:16.937 ] 00:20:16.937 }' 00:20:16.937 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2755156 00:20:16.937 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2755156 ']' 00:20:16.937 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2755156 00:20:16.937 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:16.937 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755156 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2755156' 00:20:17.198 killing process with pid 2755156 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2755156 00:20:17.198 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.198 00:20:17.198 Latency(us) 00:20:17.198 [2024-11-20T10:21:09.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.198 [2024-11-20T10:21:09.940Z] =================================================================================================================== 00:20:17.198 [2024-11-20T10:21:09.940Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2755156 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2754789 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2754789 ']' 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2754789 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2754789 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2754789' 00:20:17.198 killing process with pid 2754789 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2754789 00:20:17.198 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2754789 00:20:17.459 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:17.459 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.459 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.459 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.459 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:17.459 "subsystems": [ 00:20:17.459 { 00:20:17.459 "subsystem": "keyring", 00:20:17.459 "config": [ 00:20:17.459 { 00:20:17.459 "method": "keyring_file_add_key", 00:20:17.459 "params": { 00:20:17.459 "name": "key0", 00:20:17.459 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:17.459 } 00:20:17.459 } 00:20:17.459 ] 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "subsystem": "iobuf", 00:20:17.459 "config": [ 00:20:17.459 { 00:20:17.459 "method": "iobuf_set_options", 00:20:17.459 "params": { 00:20:17.459 "small_pool_count": 8192, 00:20:17.459 "large_pool_count": 1024, 00:20:17.459 "small_bufsize": 8192, 00:20:17.459 "large_bufsize": 135168, 00:20:17.459 "enable_numa": false 00:20:17.459 } 00:20:17.459 } 00:20:17.459 ] 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "subsystem": "sock", 00:20:17.459 "config": [ 00:20:17.459 { 00:20:17.459 "method": "sock_set_default_impl", 00:20:17.459 "params": { 00:20:17.459 "impl_name": "posix" 00:20:17.459 } 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "method": "sock_impl_set_options", 00:20:17.459 "params": { 00:20:17.459 "impl_name": "ssl", 00:20:17.459 "recv_buf_size": 4096, 00:20:17.459 "send_buf_size": 4096, 00:20:17.459 "enable_recv_pipe": true, 00:20:17.459 "enable_quickack": false, 00:20:17.459 "enable_placement_id": 0, 00:20:17.459 "enable_zerocopy_send_server": true, 00:20:17.459 "enable_zerocopy_send_client": false, 00:20:17.459 "zerocopy_threshold": 0, 00:20:17.459 "tls_version": 0, 00:20:17.459 "enable_ktls": false 00:20:17.459 } 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "method": "sock_impl_set_options", 00:20:17.459 "params": { 00:20:17.459 "impl_name": "posix", 00:20:17.459 "recv_buf_size": 2097152, 00:20:17.459 "send_buf_size": 2097152, 00:20:17.459 "enable_recv_pipe": true, 00:20:17.459 "enable_quickack": false, 00:20:17.459 "enable_placement_id": 0, 00:20:17.459 "enable_zerocopy_send_server": true, 00:20:17.459 "enable_zerocopy_send_client": false, 00:20:17.459 "zerocopy_threshold": 0, 00:20:17.459 "tls_version": 0, 00:20:17.459 "enable_ktls": false 00:20:17.459 } 00:20:17.459 } 00:20:17.459 ] 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "subsystem": "vmd", 00:20:17.459 "config": [] 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "subsystem": "accel", 00:20:17.459 "config": [ 00:20:17.459 { 00:20:17.459 "method": "accel_set_options", 00:20:17.459 "params": { 00:20:17.459 "small_cache_size": 128, 00:20:17.459 "large_cache_size": 16, 00:20:17.459 "task_count": 2048, 00:20:17.459 "sequence_count": 2048, 00:20:17.459 "buf_count": 2048 00:20:17.459 } 00:20:17.459 } 00:20:17.459 ] 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "subsystem": "bdev", 00:20:17.459 "config": [ 00:20:17.459 { 00:20:17.459 "method": "bdev_set_options", 00:20:17.459 "params": { 00:20:17.459 "bdev_io_pool_size": 65535, 00:20:17.459 "bdev_io_cache_size": 256, 00:20:17.459 "bdev_auto_examine": true, 00:20:17.459 "iobuf_small_cache_size": 128, 00:20:17.459 "iobuf_large_cache_size": 16 00:20:17.459 } 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "method": "bdev_raid_set_options", 00:20:17.459 "params": { 00:20:17.459 
"process_window_size_kb": 1024, 00:20:17.459 "process_max_bandwidth_mb_sec": 0 00:20:17.459 } 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "method": "bdev_iscsi_set_options", 00:20:17.459 "params": { 00:20:17.459 "timeout_sec": 30 00:20:17.459 } 00:20:17.459 }, 00:20:17.459 { 00:20:17.459 "method": "bdev_nvme_set_options", 00:20:17.459 "params": { 00:20:17.459 "action_on_timeout": "none", 00:20:17.459 "timeout_us": 0, 00:20:17.459 "timeout_admin_us": 0, 00:20:17.459 "keep_alive_timeout_ms": 10000, 00:20:17.459 "arbitration_burst": 0, 00:20:17.459 "low_priority_weight": 0, 00:20:17.460 "medium_priority_weight": 0, 00:20:17.460 "high_priority_weight": 0, 00:20:17.460 "nvme_adminq_poll_period_us": 10000, 00:20:17.460 "nvme_ioq_poll_period_us": 0, 00:20:17.460 "io_queue_requests": 0, 00:20:17.460 "delay_cmd_submit": true, 00:20:17.460 "transport_retry_count": 4, 00:20:17.460 "bdev_retry_count": 3, 00:20:17.460 "transport_ack_timeout": 0, 00:20:17.460 "ctrlr_loss_timeout_sec": 0, 00:20:17.460 "reconnect_delay_sec": 0, 00:20:17.460 "fast_io_fail_timeout_sec": 0, 00:20:17.460 "disable_auto_failback": false, 00:20:17.460 "generate_uuids": false, 00:20:17.460 "transport_tos": 0, 00:20:17.460 "nvme_error_stat": false, 00:20:17.460 "rdma_srq_size": 0, 00:20:17.460 "io_path_stat": false, 00:20:17.460 "allow_accel_sequence": false, 00:20:17.460 "rdma_max_cq_size": 0, 00:20:17.460 "rdma_cm_event_timeout_ms": 0, 00:20:17.460 "dhchap_digests": [ 00:20:17.460 "sha256", 00:20:17.460 "sha384", 00:20:17.460 "sha512" 00:20:17.460 ], 00:20:17.460 "dhchap_dhgroups": [ 00:20:17.460 "null", 00:20:17.460 "ffdhe2048", 00:20:17.460 "ffdhe3072", 00:20:17.460 "ffdhe4096", 00:20:17.460 "ffdhe6144", 00:20:17.460 "ffdhe8192" 00:20:17.460 ] 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "bdev_nvme_set_hotplug", 00:20:17.460 "params": { 00:20:17.460 "period_us": 100000, 00:20:17.460 "enable": false 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "bdev_malloc_create", 00:20:17.460 "params": { 00:20:17.460 "name": "malloc0", 00:20:17.460 "num_blocks": 8192, 00:20:17.460 "block_size": 4096, 00:20:17.460 "physical_block_size": 4096, 00:20:17.460 "uuid": "ddc0a526-6f2d-4ed3-b67c-37dc1daa49d6", 00:20:17.460 "optimal_io_boundary": 0, 00:20:17.460 "md_size": 0, 00:20:17.460 "dif_type": 0, 00:20:17.460 "dif_is_head_of_md": false, 00:20:17.460 "dif_pi_format": 0 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "bdev_wait_for_examine" 00:20:17.460 } 00:20:17.460 ] 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "subsystem": "nbd", 00:20:17.460 "config": [] 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "subsystem": "scheduler", 00:20:17.460 "config": [ 00:20:17.460 { 00:20:17.460 "method": "framework_set_scheduler", 00:20:17.460 "params": { 00:20:17.460 "name": "static" 00:20:17.460 } 00:20:17.460 } 00:20:17.460 ] 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "subsystem": "nvmf", 00:20:17.460 "config": [ 00:20:17.460 { 00:20:17.460 "method": "nvmf_set_config", 00:20:17.460 "params": { 00:20:17.460 "discovery_filter": "match_any", 00:20:17.460 "admin_cmd_passthru": { 00:20:17.460 "identify_ctrlr": false 00:20:17.460 }, 00:20:17.460 "dhchap_digests": [ 00:20:17.460 "sha256", 00:20:17.460 "sha384", 00:20:17.460 "sha512" 00:20:17.460 ], 00:20:17.460 "dhchap_dhgroups": [ 00:20:17.460 "null", 00:20:17.460 "ffdhe2048", 00:20:17.460 "ffdhe3072", 00:20:17.460 "ffdhe4096", 00:20:17.460 "ffdhe6144", 00:20:17.460 "ffdhe8192" 00:20:17.460 ] 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 
00:20:17.460 "method": "nvmf_set_max_subsystems", 00:20:17.460 "params": { 00:20:17.460 "max_subsystems": 1024 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "nvmf_set_crdt", 00:20:17.460 "params": { 00:20:17.460 "crdt1": 0, 00:20:17.460 "crdt2": 0, 00:20:17.460 "crdt3": 0 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "nvmf_create_transport", 00:20:17.460 "params": { 00:20:17.460 "trtype": "TCP", 00:20:17.460 "max_queue_depth": 128, 00:20:17.460 "max_io_qpairs_per_ctrlr": 127, 00:20:17.460 "in_capsule_data_size": 4096, 00:20:17.460 "max_io_size": 131072, 00:20:17.460 "io_unit_size": 131072, 00:20:17.460 "max_aq_depth": 128, 00:20:17.460 "num_shared_buffers": 511, 00:20:17.460 "buf_cache_size": 4294967295, 00:20:17.460 "dif_insert_or_strip": false, 00:20:17.460 "zcopy": false, 00:20:17.460 "c2h_success": false, 00:20:17.460 "sock_priority": 0, 00:20:17.460 "abort_timeout_sec": 1, 00:20:17.460 "ack_timeout": 0, 00:20:17.460 "data_wr_pool_size": 0 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "nvmf_create_subsystem", 00:20:17.460 "params": { 00:20:17.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.460 "allow_any_host": false, 00:20:17.460 "serial_number": "SPDK00000000000001", 00:20:17.460 "model_number": "SPDK bdev Controller", 00:20:17.460 "max_namespaces": 10, 00:20:17.460 "min_cntlid": 1, 00:20:17.460 "max_cntlid": 65519, 00:20:17.460 "ana_reporting": false 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "nvmf_subsystem_add_host", 00:20:17.460 "params": { 00:20:17.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.460 "host": "nqn.2016-06.io.spdk:host1", 00:20:17.460 "psk": "key0" 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "nvmf_subsystem_add_ns", 00:20:17.460 "params": { 00:20:17.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.460 "namespace": { 00:20:17.460 "nsid": 1, 00:20:17.460 "bdev_name": "malloc0", 00:20:17.460 "nguid": "DDC0A5266F2D4ED3B67C37DC1DAA49D6", 00:20:17.460 "uuid": "ddc0a526-6f2d-4ed3-b67c-37dc1daa49d6", 00:20:17.460 "no_auto_visible": false 00:20:17.460 } 00:20:17.460 } 00:20:17.460 }, 00:20:17.460 { 00:20:17.460 "method": "nvmf_subsystem_add_listener", 00:20:17.460 "params": { 00:20:17.460 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.460 "listen_address": { 00:20:17.460 "trtype": "TCP", 00:20:17.460 "adrfam": "IPv4", 00:20:17.460 "traddr": "10.0.0.2", 00:20:17.460 "trsvcid": "4420" 00:20:17.460 }, 00:20:17.460 "secure_channel": true 00:20:17.460 } 00:20:17.460 } 00:20:17.460 ] 00:20:17.460 } 00:20:17.460 ] 00:20:17.460 }' 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2755997 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2755997 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2755997 ']' 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:17.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.460 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.460 [2024-11-20 11:21:10.074573] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:17.460 [2024-11-20 11:21:10.074632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.460 [2024-11-20 11:21:10.166231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.460 [2024-11-20 11:21:10.195524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.460 [2024-11-20 11:21:10.195554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.460 [2024-11-20 11:21:10.195560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.460 [2024-11-20 11:21:10.195565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.460 [2024-11-20 11:21:10.195569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.460 [2024-11-20 11:21:10.196070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.721 [2024-11-20 11:21:10.389068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.721 [2024-11-20 11:21:10.421092] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.721 [2024-11-20 11:21:10.421285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2756307 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2756307 /var/tmp/bdevperf.sock 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2756307 ']' 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.293 
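Note how this target instance (pid 2755997) is not configured RPC by RPC: the JSON captured earlier with save_config is replayed wholesale through -c /dev/fd/62, and the bdevperf instance launched just above does the same with its own config on /dev/fd/63. The equivalent pattern with an ordinary file, as a sketch (tgt.json is a hypothetical name):

    # snapshot the full live configuration of a running target
    scripts/rpc.py save_config > tgt.json
    # later, recreate the same state in one shot at startup
    build/bin/nvmf_tgt -m 0x2 -c tgt.json

The test uses process substitution so the JSON never has to be written to disk.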
11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.293 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:18.293 "subsystems": [ 00:20:18.293 { 00:20:18.293 "subsystem": "keyring", 00:20:18.293 "config": [ 00:20:18.293 { 00:20:18.293 "method": "keyring_file_add_key", 00:20:18.293 "params": { 00:20:18.293 "name": "key0", 00:20:18.293 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:18.293 } 00:20:18.293 } 00:20:18.293 ] 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "subsystem": "iobuf", 00:20:18.293 "config": [ 00:20:18.293 { 00:20:18.293 "method": "iobuf_set_options", 00:20:18.293 "params": { 00:20:18.293 "small_pool_count": 8192, 00:20:18.293 "large_pool_count": 1024, 00:20:18.293 "small_bufsize": 8192, 00:20:18.293 "large_bufsize": 135168, 00:20:18.293 "enable_numa": false 00:20:18.293 } 00:20:18.293 } 00:20:18.293 ] 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "subsystem": "sock", 00:20:18.293 "config": [ 00:20:18.293 { 00:20:18.293 "method": "sock_set_default_impl", 00:20:18.293 "params": { 00:20:18.293 "impl_name": "posix" 00:20:18.293 } 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "method": "sock_impl_set_options", 00:20:18.293 "params": { 00:20:18.293 "impl_name": "ssl", 00:20:18.293 "recv_buf_size": 4096, 00:20:18.293 "send_buf_size": 4096, 00:20:18.293 "enable_recv_pipe": true, 00:20:18.293 "enable_quickack": false, 00:20:18.293 "enable_placement_id": 0, 00:20:18.293 "enable_zerocopy_send_server": true, 00:20:18.293 "enable_zerocopy_send_client": false, 00:20:18.293 "zerocopy_threshold": 0, 00:20:18.293 "tls_version": 0, 00:20:18.293 "enable_ktls": false 00:20:18.293 } 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "method": "sock_impl_set_options", 00:20:18.293 "params": { 00:20:18.293 "impl_name": "posix", 00:20:18.293 "recv_buf_size": 2097152, 00:20:18.293 "send_buf_size": 2097152, 00:20:18.293 "enable_recv_pipe": true, 00:20:18.293 "enable_quickack": false, 00:20:18.293 "enable_placement_id": 0, 00:20:18.293 "enable_zerocopy_send_server": true, 00:20:18.293 "enable_zerocopy_send_client": false, 00:20:18.293 "zerocopy_threshold": 0, 00:20:18.293 "tls_version": 0, 00:20:18.293 "enable_ktls": false 00:20:18.293 } 00:20:18.293 } 00:20:18.293 ] 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "subsystem": "vmd", 00:20:18.293 "config": [] 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "subsystem": "accel", 00:20:18.293 "config": [ 00:20:18.293 { 00:20:18.293 "method": "accel_set_options", 00:20:18.293 "params": { 00:20:18.293 "small_cache_size": 128, 00:20:18.293 "large_cache_size": 16, 00:20:18.293 "task_count": 2048, 00:20:18.293 "sequence_count": 2048, 00:20:18.293 "buf_count": 2048 00:20:18.293 } 00:20:18.293 } 00:20:18.293 ] 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "subsystem": "bdev", 00:20:18.293 "config": [ 00:20:18.293 { 00:20:18.293 "method": "bdev_set_options", 00:20:18.293 "params": { 00:20:18.293 "bdev_io_pool_size": 65535, 00:20:18.293 "bdev_io_cache_size": 256, 00:20:18.293 "bdev_auto_examine": true, 00:20:18.293 "iobuf_small_cache_size": 128, 00:20:18.293 "iobuf_large_cache_size": 16 00:20:18.293 } 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "method": "bdev_raid_set_options", 00:20:18.293 "params": { 
00:20:18.293 "process_window_size_kb": 1024, 00:20:18.293 "process_max_bandwidth_mb_sec": 0 00:20:18.293 } 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "method": "bdev_iscsi_set_options", 00:20:18.293 "params": { 00:20:18.293 "timeout_sec": 30 00:20:18.293 } 00:20:18.293 }, 00:20:18.293 { 00:20:18.293 "method": "bdev_nvme_set_options", 00:20:18.293 "params": { 00:20:18.293 "action_on_timeout": "none", 00:20:18.293 "timeout_us": 0, 00:20:18.293 "timeout_admin_us": 0, 00:20:18.293 "keep_alive_timeout_ms": 10000, 00:20:18.293 "arbitration_burst": 0, 00:20:18.293 "low_priority_weight": 0, 00:20:18.293 "medium_priority_weight": 0, 00:20:18.293 "high_priority_weight": 0, 00:20:18.293 "nvme_adminq_poll_period_us": 10000, 00:20:18.293 "nvme_ioq_poll_period_us": 0, 00:20:18.293 "io_queue_requests": 512, 00:20:18.293 "delay_cmd_submit": true, 00:20:18.293 "transport_retry_count": 4, 00:20:18.293 "bdev_retry_count": 3, 00:20:18.293 "transport_ack_timeout": 0, 00:20:18.293 "ctrlr_loss_timeout_sec": 0, 00:20:18.293 "reconnect_delay_sec": 0, 00:20:18.293 "fast_io_fail_timeout_sec": 0, 00:20:18.293 "disable_auto_failback": false, 00:20:18.293 "generate_uuids": false, 00:20:18.294 "transport_tos": 0, 00:20:18.294 "nvme_error_stat": false, 00:20:18.294 "rdma_srq_size": 0, 00:20:18.294 "io_path_stat": false, 00:20:18.294 "allow_accel_sequence": false, 00:20:18.294 "rdma_max_cq_size": 0, 00:20:18.294 "rdma_cm_event_timeout_ms": 0 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.294 , 00:20:18.294 "dhchap_digests": [ 00:20:18.294 "sha256", 00:20:18.294 "sha384", 00:20:18.294 "sha512" 00:20:18.294 ], 00:20:18.294 "dhchap_dhgroups": [ 00:20:18.294 "null", 00:20:18.294 "ffdhe2048", 00:20:18.294 "ffdhe3072", 00:20:18.294 "ffdhe4096", 00:20:18.294 "ffdhe6144", 00:20:18.294 "ffdhe8192" 00:20:18.294 ] 00:20:18.294 } 00:20:18.294 }, 00:20:18.294 { 00:20:18.294 "method": "bdev_nvme_attach_controller", 00:20:18.294 "params": { 00:20:18.294 "name": "TLSTEST", 00:20:18.294 "trtype": "TCP", 00:20:18.294 "adrfam": "IPv4", 00:20:18.294 "traddr": "10.0.0.2", 00:20:18.294 "trsvcid": "4420", 00:20:18.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.294 "prchk_reftag": false, 00:20:18.294 "prchk_guard": false, 00:20:18.294 "ctrlr_loss_timeout_sec": 0, 00:20:18.294 "reconnect_delay_sec": 0, 00:20:18.294 "fast_io_fail_timeout_sec": 0, 00:20:18.294 "psk": "key0", 00:20:18.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.294 "hdgst": false, 00:20:18.294 "ddgst": false, 00:20:18.294 "multipath": "multipath" 00:20:18.294 } 00:20:18.294 }, 00:20:18.294 { 00:20:18.294 "method": "bdev_nvme_set_hotplug", 00:20:18.294 "params": { 00:20:18.294 "period_us": 100000, 00:20:18.294 "enable": false 00:20:18.294 } 00:20:18.294 }, 00:20:18.294 { 00:20:18.294 "method": "bdev_wait_for_examine" 00:20:18.294 } 00:20:18.294 ] 00:20:18.294 }, 00:20:18.294 { 00:20:18.294 "subsystem": "nbd", 00:20:18.294 "config": [] 00:20:18.294 } 00:20:18.294 ] 00:20:18.294 }' 00:20:18.294 [2024-11-20 11:21:10.939843] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:20:18.294 [2024-11-20 11:21:10.939897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756307 ] 00:20:18.294 [2024-11-20 11:21:11.024143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.554 [2024-11-20 11:21:11.053277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.554 [2024-11-20 11:21:11.187424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.125 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.125 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:19.125 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:19.125 Running I/O for 10 seconds... 00:20:21.456 4680.00 IOPS, 18.28 MiB/s [2024-11-20T10:21:15.140Z] 5129.50 IOPS, 20.04 MiB/s [2024-11-20T10:21:16.082Z] 5308.33 IOPS, 20.74 MiB/s [2024-11-20T10:21:17.133Z] 5403.00 IOPS, 21.11 MiB/s [2024-11-20T10:21:18.075Z] 5344.60 IOPS, 20.88 MiB/s [2024-11-20T10:21:19.017Z] 5428.67 IOPS, 21.21 MiB/s [2024-11-20T10:21:19.959Z] 5437.29 IOPS, 21.24 MiB/s [2024-11-20T10:21:20.900Z] 5448.88 IOPS, 21.28 MiB/s [2024-11-20T10:21:21.841Z] 5511.33 IOPS, 21.53 MiB/s [2024-11-20T10:21:21.841Z] 5528.50 IOPS, 21.60 MiB/s 00:20:29.099 Latency(us) 00:20:29.099 [2024-11-20T10:21:21.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.099 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:29.099 Verification LBA range: start 0x0 length 0x2000 00:20:29.099 TLSTESTn1 : 10.02 5530.83 21.60 0.00 0.00 23107.90 4396.37 97430.19 00:20:29.099 [2024-11-20T10:21:21.841Z] =================================================================================================================== 00:20:29.099 [2024-11-20T10:21:21.841Z] Total : 5530.83 21.60 0.00 0.00 23107.90 4396.37 97430.19 00:20:29.099 { 00:20:29.099 "results": [ 00:20:29.099 { 00:20:29.099 "job": "TLSTESTn1", 00:20:29.099 "core_mask": "0x4", 00:20:29.099 "workload": "verify", 00:20:29.099 "status": "finished", 00:20:29.099 "verify_range": { 00:20:29.099 "start": 0, 00:20:29.099 "length": 8192 00:20:29.099 }, 00:20:29.099 "queue_depth": 128, 00:20:29.099 "io_size": 4096, 00:20:29.099 "runtime": 10.018753, 00:20:29.099 "iops": 5530.828038180001, 00:20:29.099 "mibps": 21.60479702414063, 00:20:29.099 "io_failed": 0, 00:20:29.099 "io_timeout": 0, 00:20:29.099 "avg_latency_us": 23107.904956327147, 00:20:29.099 "min_latency_us": 4396.373333333333, 00:20:29.099 "max_latency_us": 97430.18666666666 00:20:29.099 } 00:20:29.099 ], 00:20:29.099 "core_count": 1 00:20:29.099 } 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2756307 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2756307 ']' 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2756307 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756307 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:29.359 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756307' 00:20:29.359 killing process with pid 2756307 00:20:29.360 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2756307 00:20:29.360 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.360 00:20:29.360 Latency(us) 00:20:29.360 [2024-11-20T10:21:22.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.360 [2024-11-20T10:21:22.102Z] =================================================================================================================== 00:20:29.360 [2024-11-20T10:21:22.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.360 11:21:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2756307 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2755997 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2755997 ']' 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2755997 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755997 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2755997' 00:20:29.360 killing process with pid 2755997 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2755997 00:20:29.360 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2755997 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2758514 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2758514 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
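The teardown just above is the killprocess helper from autotest_common.sh; its moving parts are all visible in the trace: kill -0 to confirm the PID is still alive, ps --no-headers -o comm= to make sure the target of the kill is not sudo itself, then kill followed by wait to reap the process. A simplified paraphrase of that logic (not the exact helper source):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1  # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                             # reap; a killed app exits nonzero
    }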
00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2758514 ']' 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.620 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.620 [2024-11-20 11:21:22.260287] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:29.620 [2024-11-20 11:21:22.260348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.621 [2024-11-20 11:21:22.352481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.881 [2024-11-20 11:21:22.399451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.881 [2024-11-20 11:21:22.399502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.881 [2024-11-20 11:21:22.399510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.881 [2024-11-20 11:21:22.399517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.881 [2024-11-20 11:21:22.399524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
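The setup_nvmf_tgt helper that runs next (target/tls.sh@50-59) is the whole TLS bring-up in a handful of RPCs. As a sketch against the freshly started target, with the long jenkins workspace prefix elided:

    # TCP transport, then a subsystem with room for 10 namespaces
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-required (secure_channel: true in the save_config output)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # back the subsystem with a 32 MiB malloc bdev as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK and authorize the host NQN against it
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0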
00:20:29.881 [2024-11-20 11:21:22.400277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.tyI0Y1O7iz 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tyI0Y1O7iz 00:20:30.452 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:30.713 [2024-11-20 11:21:23.270829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.713 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:30.973 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:30.973 [2024-11-20 11:21:23.667825] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.973 [2024-11-20 11:21:23.668156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.974 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:31.234 malloc0 00:20:31.234 11:21:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:31.495 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:31.756 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:32.016 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2759031 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2759031 /var/tmp/bdevperf.sock 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2759031 ']' 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:32.017 11:21:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.017 [2024-11-20 11:21:24.550931] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:32.017 [2024-11-20 11:21:24.551041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759031 ] 00:20:32.017 [2024-11-20 11:21:24.640656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.017 [2024-11-20 11:21:24.674396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.958 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.958 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:32.958 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:32.958 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:33.218 [2024-11-20 11:21:25.700397] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.218 nvme0n1 00:20:33.218 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.218 Running I/O for 1 seconds... 
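Stripped of Jenkins paths and timestamps, the TLS plumbing traced in this block (target/tls.sh@50-59 on the target, @229-234 on the initiator) is a short list of RPCs; every command and flag below appears verbatim in the trace. Note that the same PSK file is registered, under the same key name, on both RPC sockets, and that `-k` on the listener is what makes the port TLS-capable.

    # target side (rpc.py defaults to /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side (bdevperf exposes its own RPC socket)
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The "TLS support is considered experimental" notices on both sides confirm the PSK path was actually taken on the listener and on the controller attach.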
00:20:34.418 3562.00 IOPS, 13.91 MiB/s 00:20:34.418 Latency(us) 00:20:34.418 [2024-11-20T10:21:27.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:34.418 Verification LBA range: start 0x0 length 0x2000 00:20:34.418 nvme0n1 : 1.02 3632.52 14.19 0.00 0.00 35020.54 4505.60 113595.73 00:20:34.418 [2024-11-20T10:21:27.160Z] =================================================================================================================== 00:20:34.418 [2024-11-20T10:21:27.160Z] Total : 3632.52 14.19 0.00 0.00 35020.54 4505.60 113595.73 00:20:34.418 { 00:20:34.418 "results": [ 00:20:34.418 { 00:20:34.418 "job": "nvme0n1", 00:20:34.418 "core_mask": "0x2", 00:20:34.418 "workload": "verify", 00:20:34.418 "status": "finished", 00:20:34.418 "verify_range": { 00:20:34.418 "start": 0, 00:20:34.418 "length": 8192 00:20:34.418 }, 00:20:34.418 "queue_depth": 128, 00:20:34.418 "io_size": 4096, 00:20:34.418 "runtime": 1.015824, 00:20:34.418 "iops": 3632.519019042669, 00:20:34.418 "mibps": 14.189527418135425, 00:20:34.418 "io_failed": 0, 00:20:34.418 "io_timeout": 0, 00:20:34.418 "avg_latency_us": 35020.54469376694, 00:20:34.418 "min_latency_us": 4505.6, 00:20:34.418 "max_latency_us": 113595.73333333334 00:20:34.418 } 00:20:34.418 ], 00:20:34.418 "core_count": 1 00:20:34.418 } 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2759031 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2759031 ']' 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2759031 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759031 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759031' 00:20:34.418 killing process with pid 2759031 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2759031 00:20:34.418 Received shutdown signal, test time was about 1.000000 seconds 00:20:34.418 00:20:34.418 Latency(us) 00:20:34.418 [2024-11-20T10:21:27.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.418 [2024-11-20T10:21:27.160Z] =================================================================================================================== 00:20:34.418 [2024-11-20T10:21:27.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.418 11:21:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2759031 00:20:34.418 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2758514 00:20:34.418 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2758514 ']' 00:20:34.418 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2758514 00:20:34.418 11:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.418 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.418 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2758514 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2758514' 00:20:34.678 killing process with pid 2758514 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2758514 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2758514 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2759484 00:20:34.678 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2759484 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2759484 ']' 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.679 11:21:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.679 [2024-11-20 11:21:27.358471] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:34.679 [2024-11-20 11:21:27.358528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.949 [2024-11-20 11:21:27.453878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.950 [2024-11-20 11:21:27.496949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.950 [2024-11-20 11:21:27.496996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
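Two quick consistency checks on the bdevperf table a few lines up (the run that settled at 3632.52 IOPS). The MiB/s column is derived, not independently measured: it is IOPS times the 4096-byte I/O size. And with a constantly full queue of 128, Little's law predicts the reported average latency:

    3632.52 IOPS x 4096 B = 14,878,802 B/s; / 2^20 = 14.19 MiB/s   (matches "mibps": 14.1895 in the JSON)
    128 / 3632.52 IOPS ≈ 0.03524 s ≈ 35,237 us                     (reported average 35,020.54 us, within ~1%)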
00:20:34.950 [2024-11-20 11:21:27.497004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.950 [2024-11-20 11:21:27.497011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.950 [2024-11-20 11:21:27.497017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.950 [2024-11-20 11:21:27.497703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.522 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.522 [2024-11-20 11:21:28.215575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.522 malloc0 00:20:35.522 [2024-11-20 11:21:28.245741] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.522 [2024-11-20 11:21:28.246076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2759738 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2759738 /var/tmp/bdevperf.sock 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2759738 ']' 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.783 11:21:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.783 [2024-11-20 11:21:28.335662] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:20:35.783 [2024-11-20 11:21:28.335725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759738 ] 00:20:35.783 [2024-11-20 11:21:28.421733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.783 [2024-11-20 11:21:28.456162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.724 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.724 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.724 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tyI0Y1O7iz 00:20:36.724 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:36.724 [2024-11-20 11:21:29.410002] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.984 nvme0n1 00:20:36.984 11:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:36.984 Running I/O for 1 seconds... 00:20:37.923 5275.00 IOPS, 20.61 MiB/s 00:20:37.923 Latency(us) 00:20:37.923 [2024-11-20T10:21:30.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:37.924 Verification LBA range: start 0x0 length 0x2000 00:20:37.924 nvme0n1 : 1.01 5343.13 20.87 0.00 0.00 23814.96 4478.29 30146.56 00:20:37.924 [2024-11-20T10:21:30.666Z] =================================================================================================================== 00:20:37.924 [2024-11-20T10:21:30.666Z] Total : 5343.13 20.87 0.00 0.00 23814.96 4478.29 30146.56 00:20:37.924 { 00:20:37.924 "results": [ 00:20:37.924 { 00:20:37.924 "job": "nvme0n1", 00:20:37.924 "core_mask": "0x2", 00:20:37.924 "workload": "verify", 00:20:37.924 "status": "finished", 00:20:37.924 "verify_range": { 00:20:37.924 "start": 0, 00:20:37.924 "length": 8192 00:20:37.924 }, 00:20:37.924 "queue_depth": 128, 00:20:37.924 "io_size": 4096, 00:20:37.924 "runtime": 1.011205, 00:20:37.924 "iops": 5343.13022581969, 00:20:37.924 "mibps": 20.871602444608165, 00:20:37.924 "io_failed": 0, 00:20:37.924 "io_timeout": 0, 00:20:37.924 "avg_latency_us": 23814.958701955704, 00:20:37.924 "min_latency_us": 4478.293333333333, 00:20:37.924 "max_latency_us": 30146.56 00:20:37.924 } 00:20:37.924 ], 00:20:37.924 "core_count": 1 00:20:37.924 } 00:20:37.924 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:37.924 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.924 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.184 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.184 11:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:38.184 "subsystems": [ 00:20:38.184 { 00:20:38.184 "subsystem": "keyring", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "keyring_file_add_key", 00:20:38.184 "params": { 00:20:38.184 "name": "key0", 00:20:38.184 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "iobuf", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "iobuf_set_options", 00:20:38.184 "params": { 00:20:38.184 "small_pool_count": 8192, 00:20:38.184 "large_pool_count": 1024, 00:20:38.184 "small_bufsize": 8192, 00:20:38.184 "large_bufsize": 135168, 00:20:38.184 "enable_numa": false 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "sock", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "sock_set_default_impl", 00:20:38.184 "params": { 00:20:38.184 "impl_name": "posix" 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "sock_impl_set_options", 00:20:38.184 "params": { 00:20:38.184 "impl_name": "ssl", 00:20:38.184 "recv_buf_size": 4096, 00:20:38.184 "send_buf_size": 4096, 00:20:38.184 "enable_recv_pipe": true, 00:20:38.184 "enable_quickack": false, 00:20:38.184 "enable_placement_id": 0, 00:20:38.184 "enable_zerocopy_send_server": true, 00:20:38.184 "enable_zerocopy_send_client": false, 00:20:38.184 "zerocopy_threshold": 0, 00:20:38.184 "tls_version": 0, 00:20:38.184 "enable_ktls": false 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "sock_impl_set_options", 00:20:38.184 "params": { 00:20:38.184 "impl_name": "posix", 00:20:38.184 "recv_buf_size": 2097152, 00:20:38.184 "send_buf_size": 2097152, 00:20:38.184 "enable_recv_pipe": true, 00:20:38.184 "enable_quickack": false, 00:20:38.184 "enable_placement_id": 0, 00:20:38.184 "enable_zerocopy_send_server": true, 00:20:38.184 "enable_zerocopy_send_client": false, 00:20:38.184 "zerocopy_threshold": 0, 00:20:38.184 "tls_version": 0, 00:20:38.184 "enable_ktls": false 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "vmd", 00:20:38.184 "config": [] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "accel", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "accel_set_options", 00:20:38.184 "params": { 00:20:38.184 "small_cache_size": 128, 00:20:38.184 "large_cache_size": 16, 00:20:38.184 "task_count": 2048, 00:20:38.184 "sequence_count": 2048, 00:20:38.184 "buf_count": 2048 00:20:38.184 } 00:20:38.184 } 00:20:38.184 ] 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "subsystem": "bdev", 00:20:38.184 "config": [ 00:20:38.184 { 00:20:38.184 "method": "bdev_set_options", 00:20:38.184 "params": { 00:20:38.184 "bdev_io_pool_size": 65535, 00:20:38.184 "bdev_io_cache_size": 256, 00:20:38.184 "bdev_auto_examine": true, 00:20:38.184 "iobuf_small_cache_size": 128, 00:20:38.184 "iobuf_large_cache_size": 16 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "bdev_raid_set_options", 00:20:38.184 "params": { 00:20:38.184 "process_window_size_kb": 1024, 00:20:38.184 "process_max_bandwidth_mb_sec": 0 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "bdev_iscsi_set_options", 00:20:38.184 "params": { 00:20:38.184 "timeout_sec": 30 00:20:38.184 } 00:20:38.184 }, 00:20:38.184 { 00:20:38.184 "method": "bdev_nvme_set_options", 00:20:38.184 "params": { 00:20:38.184 "action_on_timeout": "none", 00:20:38.184 
"timeout_us": 0, 00:20:38.184 "timeout_admin_us": 0, 00:20:38.184 "keep_alive_timeout_ms": 10000, 00:20:38.184 "arbitration_burst": 0, 00:20:38.184 "low_priority_weight": 0, 00:20:38.184 "medium_priority_weight": 0, 00:20:38.184 "high_priority_weight": 0, 00:20:38.184 "nvme_adminq_poll_period_us": 10000, 00:20:38.184 "nvme_ioq_poll_period_us": 0, 00:20:38.184 "io_queue_requests": 0, 00:20:38.184 "delay_cmd_submit": true, 00:20:38.184 "transport_retry_count": 4, 00:20:38.184 "bdev_retry_count": 3, 00:20:38.184 "transport_ack_timeout": 0, 00:20:38.184 "ctrlr_loss_timeout_sec": 0, 00:20:38.184 "reconnect_delay_sec": 0, 00:20:38.184 "fast_io_fail_timeout_sec": 0, 00:20:38.184 "disable_auto_failback": false, 00:20:38.184 "generate_uuids": false, 00:20:38.184 "transport_tos": 0, 00:20:38.184 "nvme_error_stat": false, 00:20:38.184 "rdma_srq_size": 0, 00:20:38.184 "io_path_stat": false, 00:20:38.184 "allow_accel_sequence": false, 00:20:38.184 "rdma_max_cq_size": 0, 00:20:38.184 "rdma_cm_event_timeout_ms": 0, 00:20:38.184 "dhchap_digests": [ 00:20:38.184 "sha256", 00:20:38.184 "sha384", 00:20:38.184 "sha512" 00:20:38.184 ], 00:20:38.184 "dhchap_dhgroups": [ 00:20:38.184 "null", 00:20:38.184 "ffdhe2048", 00:20:38.184 "ffdhe3072", 00:20:38.184 "ffdhe4096", 00:20:38.184 "ffdhe6144", 00:20:38.184 "ffdhe8192" 00:20:38.184 ] 00:20:38.184 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "bdev_nvme_set_hotplug", 00:20:38.185 "params": { 00:20:38.185 "period_us": 100000, 00:20:38.185 "enable": false 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "bdev_malloc_create", 00:20:38.185 "params": { 00:20:38.185 "name": "malloc0", 00:20:38.185 "num_blocks": 8192, 00:20:38.185 "block_size": 4096, 00:20:38.185 "physical_block_size": 4096, 00:20:38.185 "uuid": "8361e7b8-8bc4-43a8-a016-1fe04b0d1731", 00:20:38.185 "optimal_io_boundary": 0, 00:20:38.185 "md_size": 0, 00:20:38.185 "dif_type": 0, 00:20:38.185 "dif_is_head_of_md": false, 00:20:38.185 "dif_pi_format": 0 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "bdev_wait_for_examine" 00:20:38.185 } 00:20:38.185 ] 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "subsystem": "nbd", 00:20:38.185 "config": [] 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "subsystem": "scheduler", 00:20:38.185 "config": [ 00:20:38.185 { 00:20:38.185 "method": "framework_set_scheduler", 00:20:38.185 "params": { 00:20:38.185 "name": "static" 00:20:38.185 } 00:20:38.185 } 00:20:38.185 ] 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "subsystem": "nvmf", 00:20:38.185 "config": [ 00:20:38.185 { 00:20:38.185 "method": "nvmf_set_config", 00:20:38.185 "params": { 00:20:38.185 "discovery_filter": "match_any", 00:20:38.185 "admin_cmd_passthru": { 00:20:38.185 "identify_ctrlr": false 00:20:38.185 }, 00:20:38.185 "dhchap_digests": [ 00:20:38.185 "sha256", 00:20:38.185 "sha384", 00:20:38.185 "sha512" 00:20:38.185 ], 00:20:38.185 "dhchap_dhgroups": [ 00:20:38.185 "null", 00:20:38.185 "ffdhe2048", 00:20:38.185 "ffdhe3072", 00:20:38.185 "ffdhe4096", 00:20:38.185 "ffdhe6144", 00:20:38.185 "ffdhe8192" 00:20:38.185 ] 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_set_max_subsystems", 00:20:38.185 "params": { 00:20:38.185 "max_subsystems": 1024 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_set_crdt", 00:20:38.185 "params": { 00:20:38.185 "crdt1": 0, 00:20:38.185 "crdt2": 0, 00:20:38.185 "crdt3": 0 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_create_transport", 00:20:38.185 "params": 
{ 00:20:38.185 "trtype": "TCP", 00:20:38.185 "max_queue_depth": 128, 00:20:38.185 "max_io_qpairs_per_ctrlr": 127, 00:20:38.185 "in_capsule_data_size": 4096, 00:20:38.185 "max_io_size": 131072, 00:20:38.185 "io_unit_size": 131072, 00:20:38.185 "max_aq_depth": 128, 00:20:38.185 "num_shared_buffers": 511, 00:20:38.185 "buf_cache_size": 4294967295, 00:20:38.185 "dif_insert_or_strip": false, 00:20:38.185 "zcopy": false, 00:20:38.185 "c2h_success": false, 00:20:38.185 "sock_priority": 0, 00:20:38.185 "abort_timeout_sec": 1, 00:20:38.185 "ack_timeout": 0, 00:20:38.185 "data_wr_pool_size": 0 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_create_subsystem", 00:20:38.185 "params": { 00:20:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.185 "allow_any_host": false, 00:20:38.185 "serial_number": "00000000000000000000", 00:20:38.185 "model_number": "SPDK bdev Controller", 00:20:38.185 "max_namespaces": 32, 00:20:38.185 "min_cntlid": 1, 00:20:38.185 "max_cntlid": 65519, 00:20:38.185 "ana_reporting": false 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_subsystem_add_host", 00:20:38.185 "params": { 00:20:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.185 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.185 "psk": "key0" 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_subsystem_add_ns", 00:20:38.185 "params": { 00:20:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.185 "namespace": { 00:20:38.185 "nsid": 1, 00:20:38.185 "bdev_name": "malloc0", 00:20:38.185 "nguid": "8361E7B88BC443A8A0161FE04B0D1731", 00:20:38.185 "uuid": "8361e7b8-8bc4-43a8-a016-1fe04b0d1731", 00:20:38.185 "no_auto_visible": false 00:20:38.185 } 00:20:38.185 } 00:20:38.185 }, 00:20:38.185 { 00:20:38.185 "method": "nvmf_subsystem_add_listener", 00:20:38.185 "params": { 00:20:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.185 "listen_address": { 00:20:38.185 "trtype": "TCP", 00:20:38.185 "adrfam": "IPv4", 00:20:38.185 "traddr": "10.0.0.2", 00:20:38.185 "trsvcid": "4420" 00:20:38.185 }, 00:20:38.185 "secure_channel": false, 00:20:38.185 "sock_impl": "ssl" 00:20:38.185 } 00:20:38.185 } 00:20:38.185 ] 00:20:38.185 } 00:20:38.185 ] 00:20:38.185 }' 00:20:38.185 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:38.445 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:38.445 "subsystems": [ 00:20:38.445 { 00:20:38.445 "subsystem": "keyring", 00:20:38.445 "config": [ 00:20:38.445 { 00:20:38.445 "method": "keyring_file_add_key", 00:20:38.445 "params": { 00:20:38.445 "name": "key0", 00:20:38.445 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:38.445 } 00:20:38.445 } 00:20:38.445 ] 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "subsystem": "iobuf", 00:20:38.445 "config": [ 00:20:38.445 { 00:20:38.445 "method": "iobuf_set_options", 00:20:38.445 "params": { 00:20:38.445 "small_pool_count": 8192, 00:20:38.445 "large_pool_count": 1024, 00:20:38.445 "small_bufsize": 8192, 00:20:38.445 "large_bufsize": 135168, 00:20:38.445 "enable_numa": false 00:20:38.445 } 00:20:38.445 } 00:20:38.445 ] 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "subsystem": "sock", 00:20:38.445 "config": [ 00:20:38.445 { 00:20:38.445 "method": "sock_set_default_impl", 00:20:38.445 "params": { 00:20:38.445 "impl_name": "posix" 00:20:38.445 } 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "method": "sock_impl_set_options", 00:20:38.445 
"params": { 00:20:38.445 "impl_name": "ssl", 00:20:38.445 "recv_buf_size": 4096, 00:20:38.445 "send_buf_size": 4096, 00:20:38.445 "enable_recv_pipe": true, 00:20:38.445 "enable_quickack": false, 00:20:38.445 "enable_placement_id": 0, 00:20:38.445 "enable_zerocopy_send_server": true, 00:20:38.445 "enable_zerocopy_send_client": false, 00:20:38.445 "zerocopy_threshold": 0, 00:20:38.445 "tls_version": 0, 00:20:38.445 "enable_ktls": false 00:20:38.445 } 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "method": "sock_impl_set_options", 00:20:38.445 "params": { 00:20:38.445 "impl_name": "posix", 00:20:38.445 "recv_buf_size": 2097152, 00:20:38.445 "send_buf_size": 2097152, 00:20:38.445 "enable_recv_pipe": true, 00:20:38.445 "enable_quickack": false, 00:20:38.445 "enable_placement_id": 0, 00:20:38.445 "enable_zerocopy_send_server": true, 00:20:38.445 "enable_zerocopy_send_client": false, 00:20:38.445 "zerocopy_threshold": 0, 00:20:38.445 "tls_version": 0, 00:20:38.445 "enable_ktls": false 00:20:38.445 } 00:20:38.445 } 00:20:38.445 ] 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "subsystem": "vmd", 00:20:38.445 "config": [] 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "subsystem": "accel", 00:20:38.445 "config": [ 00:20:38.445 { 00:20:38.445 "method": "accel_set_options", 00:20:38.445 "params": { 00:20:38.445 "small_cache_size": 128, 00:20:38.445 "large_cache_size": 16, 00:20:38.445 "task_count": 2048, 00:20:38.445 "sequence_count": 2048, 00:20:38.445 "buf_count": 2048 00:20:38.445 } 00:20:38.445 } 00:20:38.445 ] 00:20:38.445 }, 00:20:38.445 { 00:20:38.445 "subsystem": "bdev", 00:20:38.445 "config": [ 00:20:38.445 { 00:20:38.445 "method": "bdev_set_options", 00:20:38.445 "params": { 00:20:38.445 "bdev_io_pool_size": 65535, 00:20:38.445 "bdev_io_cache_size": 256, 00:20:38.445 "bdev_auto_examine": true, 00:20:38.446 "iobuf_small_cache_size": 128, 00:20:38.446 "iobuf_large_cache_size": 16 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_raid_set_options", 00:20:38.446 "params": { 00:20:38.446 "process_window_size_kb": 1024, 00:20:38.446 "process_max_bandwidth_mb_sec": 0 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_iscsi_set_options", 00:20:38.446 "params": { 00:20:38.446 "timeout_sec": 30 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_nvme_set_options", 00:20:38.446 "params": { 00:20:38.446 "action_on_timeout": "none", 00:20:38.446 "timeout_us": 0, 00:20:38.446 "timeout_admin_us": 0, 00:20:38.446 "keep_alive_timeout_ms": 10000, 00:20:38.446 "arbitration_burst": 0, 00:20:38.446 "low_priority_weight": 0, 00:20:38.446 "medium_priority_weight": 0, 00:20:38.446 "high_priority_weight": 0, 00:20:38.446 "nvme_adminq_poll_period_us": 10000, 00:20:38.446 "nvme_ioq_poll_period_us": 0, 00:20:38.446 "io_queue_requests": 512, 00:20:38.446 "delay_cmd_submit": true, 00:20:38.446 "transport_retry_count": 4, 00:20:38.446 "bdev_retry_count": 3, 00:20:38.446 "transport_ack_timeout": 0, 00:20:38.446 "ctrlr_loss_timeout_sec": 0, 00:20:38.446 "reconnect_delay_sec": 0, 00:20:38.446 "fast_io_fail_timeout_sec": 0, 00:20:38.446 "disable_auto_failback": false, 00:20:38.446 "generate_uuids": false, 00:20:38.446 "transport_tos": 0, 00:20:38.446 "nvme_error_stat": false, 00:20:38.446 "rdma_srq_size": 0, 00:20:38.446 "io_path_stat": false, 00:20:38.446 "allow_accel_sequence": false, 00:20:38.446 "rdma_max_cq_size": 0, 00:20:38.446 "rdma_cm_event_timeout_ms": 0, 00:20:38.446 "dhchap_digests": [ 00:20:38.446 "sha256", 00:20:38.446 "sha384", 00:20:38.446 
"sha512" 00:20:38.446 ], 00:20:38.446 "dhchap_dhgroups": [ 00:20:38.446 "null", 00:20:38.446 "ffdhe2048", 00:20:38.446 "ffdhe3072", 00:20:38.446 "ffdhe4096", 00:20:38.446 "ffdhe6144", 00:20:38.446 "ffdhe8192" 00:20:38.446 ] 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_nvme_attach_controller", 00:20:38.446 "params": { 00:20:38.446 "name": "nvme0", 00:20:38.446 "trtype": "TCP", 00:20:38.446 "adrfam": "IPv4", 00:20:38.446 "traddr": "10.0.0.2", 00:20:38.446 "trsvcid": "4420", 00:20:38.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.446 "prchk_reftag": false, 00:20:38.446 "prchk_guard": false, 00:20:38.446 "ctrlr_loss_timeout_sec": 0, 00:20:38.446 "reconnect_delay_sec": 0, 00:20:38.446 "fast_io_fail_timeout_sec": 0, 00:20:38.446 "psk": "key0", 00:20:38.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.446 "hdgst": false, 00:20:38.446 "ddgst": false, 00:20:38.446 "multipath": "multipath" 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_nvme_set_hotplug", 00:20:38.446 "params": { 00:20:38.446 "period_us": 100000, 00:20:38.446 "enable": false 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_enable_histogram", 00:20:38.446 "params": { 00:20:38.446 "name": "nvme0n1", 00:20:38.446 "enable": true 00:20:38.446 } 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "method": "bdev_wait_for_examine" 00:20:38.446 } 00:20:38.446 ] 00:20:38.446 }, 00:20:38.446 { 00:20:38.446 "subsystem": "nbd", 00:20:38.446 "config": [] 00:20:38.446 } 00:20:38.446 ] 00:20:38.446 }' 00:20:38.446 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2759738 00:20:38.446 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2759738 ']' 00:20:38.446 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2759738 00:20:38.446 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.446 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.446 11:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759738 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759738' 00:20:38.446 killing process with pid 2759738 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2759738 00:20:38.446 Received shutdown signal, test time was about 1.000000 seconds 00:20:38.446 00:20:38.446 Latency(us) 00:20:38.446 [2024-11-20T10:21:31.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.446 [2024-11-20T10:21:31.188Z] =================================================================================================================== 00:20:38.446 [2024-11-20T10:21:31.188Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2759738 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2759484 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2759484 
']' 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2759484 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.446 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2759484 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2759484' 00:20:38.707 killing process with pid 2759484 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2759484 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2759484 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.707 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:38.707 "subsystems": [ 00:20:38.707 { 00:20:38.707 "subsystem": "keyring", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "keyring_file_add_key", 00:20:38.707 "params": { 00:20:38.707 "name": "key0", 00:20:38.707 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:38.707 } 00:20:38.707 } 00:20:38.707 ] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "iobuf", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "iobuf_set_options", 00:20:38.707 "params": { 00:20:38.707 "small_pool_count": 8192, 00:20:38.707 "large_pool_count": 1024, 00:20:38.707 "small_bufsize": 8192, 00:20:38.707 "large_bufsize": 135168, 00:20:38.707 "enable_numa": false 00:20:38.707 } 00:20:38.707 } 00:20:38.707 ] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "sock", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "sock_set_default_impl", 00:20:38.707 "params": { 00:20:38.707 "impl_name": "posix" 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "sock_impl_set_options", 00:20:38.707 "params": { 00:20:38.707 "impl_name": "ssl", 00:20:38.707 "recv_buf_size": 4096, 00:20:38.707 "send_buf_size": 4096, 00:20:38.707 "enable_recv_pipe": true, 00:20:38.707 "enable_quickack": false, 00:20:38.707 "enable_placement_id": 0, 00:20:38.707 "enable_zerocopy_send_server": true, 00:20:38.707 "enable_zerocopy_send_client": false, 00:20:38.707 "zerocopy_threshold": 0, 00:20:38.707 "tls_version": 0, 00:20:38.707 "enable_ktls": false 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "sock_impl_set_options", 00:20:38.707 "params": { 00:20:38.707 "impl_name": "posix", 00:20:38.707 "recv_buf_size": 2097152, 00:20:38.707 "send_buf_size": 2097152, 00:20:38.707 "enable_recv_pipe": true, 00:20:38.707 "enable_quickack": false, 00:20:38.707 "enable_placement_id": 0, 00:20:38.707 "enable_zerocopy_send_server": true, 00:20:38.707 "enable_zerocopy_send_client": false, 00:20:38.707 "zerocopy_threshold": 0, 00:20:38.707 "tls_version": 0, 00:20:38.707 "enable_ktls": 
false 00:20:38.707 } 00:20:38.707 } 00:20:38.707 ] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "vmd", 00:20:38.707 "config": [] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "accel", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "accel_set_options", 00:20:38.707 "params": { 00:20:38.707 "small_cache_size": 128, 00:20:38.707 "large_cache_size": 16, 00:20:38.707 "task_count": 2048, 00:20:38.707 "sequence_count": 2048, 00:20:38.707 "buf_count": 2048 00:20:38.707 } 00:20:38.707 } 00:20:38.707 ] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "bdev", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "bdev_set_options", 00:20:38.707 "params": { 00:20:38.707 "bdev_io_pool_size": 65535, 00:20:38.707 "bdev_io_cache_size": 256, 00:20:38.707 "bdev_auto_examine": true, 00:20:38.707 "iobuf_small_cache_size": 128, 00:20:38.707 "iobuf_large_cache_size": 16 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "bdev_raid_set_options", 00:20:38.707 "params": { 00:20:38.707 "process_window_size_kb": 1024, 00:20:38.707 "process_max_bandwidth_mb_sec": 0 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "bdev_iscsi_set_options", 00:20:38.707 "params": { 00:20:38.707 "timeout_sec": 30 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "bdev_nvme_set_options", 00:20:38.707 "params": { 00:20:38.707 "action_on_timeout": "none", 00:20:38.707 "timeout_us": 0, 00:20:38.707 "timeout_admin_us": 0, 00:20:38.707 "keep_alive_timeout_ms": 10000, 00:20:38.707 "arbitration_burst": 0, 00:20:38.707 "low_priority_weight": 0, 00:20:38.707 "medium_priority_weight": 0, 00:20:38.707 "high_priority_weight": 0, 00:20:38.707 "nvme_adminq_poll_period_us": 10000, 00:20:38.707 "nvme_ioq_poll_period_us": 0, 00:20:38.707 "io_queue_requests": 0, 00:20:38.707 "delay_cmd_submit": true, 00:20:38.707 "transport_retry_count": 4, 00:20:38.707 "bdev_retry_count": 3, 00:20:38.707 "transport_ack_timeout": 0, 00:20:38.707 "ctrlr_loss_timeout_sec": 0, 00:20:38.707 "reconnect_delay_sec": 0, 00:20:38.707 "fast_io_fail_timeout_sec": 0, 00:20:38.707 "disable_auto_failback": false, 00:20:38.707 "generate_uuids": false, 00:20:38.707 "transport_tos": 0, 00:20:38.707 "nvme_error_stat": false, 00:20:38.707 "rdma_srq_size": 0, 00:20:38.707 "io_path_stat": false, 00:20:38.707 "allow_accel_sequence": false, 00:20:38.707 "rdma_max_cq_size": 0, 00:20:38.707 "rdma_cm_event_timeout_ms": 0, 00:20:38.707 "dhchap_digests": [ 00:20:38.707 "sha256", 00:20:38.707 "sha384", 00:20:38.707 "sha512" 00:20:38.707 ], 00:20:38.707 "dhchap_dhgroups": [ 00:20:38.707 "null", 00:20:38.707 "ffdhe2048", 00:20:38.707 "ffdhe3072", 00:20:38.707 "ffdhe4096", 00:20:38.707 "ffdhe6144", 00:20:38.707 "ffdhe8192" 00:20:38.707 ] 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "bdev_nvme_set_hotplug", 00:20:38.707 "params": { 00:20:38.707 "period_us": 100000, 00:20:38.707 "enable": false 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "bdev_malloc_create", 00:20:38.707 "params": { 00:20:38.707 "name": "malloc0", 00:20:38.707 "num_blocks": 8192, 00:20:38.707 "block_size": 4096, 00:20:38.707 "physical_block_size": 4096, 00:20:38.707 "uuid": "8361e7b8-8bc4-43a8-a016-1fe04b0d1731", 00:20:38.707 "optimal_io_boundary": 0, 00:20:38.707 "md_size": 0, 00:20:38.707 "dif_type": 0, 00:20:38.707 "dif_is_head_of_md": false, 00:20:38.707 "dif_pi_format": 0 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "bdev_wait_for_examine" 
00:20:38.707 } 00:20:38.707 ] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "nbd", 00:20:38.707 "config": [] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "scheduler", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "framework_set_scheduler", 00:20:38.707 "params": { 00:20:38.707 "name": "static" 00:20:38.707 } 00:20:38.707 } 00:20:38.707 ] 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "subsystem": "nvmf", 00:20:38.707 "config": [ 00:20:38.707 { 00:20:38.707 "method": "nvmf_set_config", 00:20:38.707 "params": { 00:20:38.707 "discovery_filter": "match_any", 00:20:38.707 "admin_cmd_passthru": { 00:20:38.707 "identify_ctrlr": false 00:20:38.707 }, 00:20:38.707 "dhchap_digests": [ 00:20:38.707 "sha256", 00:20:38.707 "sha384", 00:20:38.707 "sha512" 00:20:38.707 ], 00:20:38.707 "dhchap_dhgroups": [ 00:20:38.707 "null", 00:20:38.707 "ffdhe2048", 00:20:38.707 "ffdhe3072", 00:20:38.707 "ffdhe4096", 00:20:38.707 "ffdhe6144", 00:20:38.707 "ffdhe8192" 00:20:38.707 ] 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "method": "nvmf_set_max_subsystems", 00:20:38.707 "params": { 00:20:38.707 "max_subsystems": 1024 00:20:38.707 } 00:20:38.707 }, 00:20:38.707 { 00:20:38.708 "method": "nvmf_set_crdt", 00:20:38.708 "params": { 00:20:38.708 "crdt1": 0, 00:20:38.708 "crdt2": 0, 00:20:38.708 "crdt3": 0 00:20:38.708 } 00:20:38.708 }, 00:20:38.708 { 00:20:38.708 "method": "nvmf_create_transport", 00:20:38.708 "params": { 00:20:38.708 "trtype": "TCP", 00:20:38.708 "max_queue_depth": 128, 00:20:38.708 "max_io_qpairs_per_ctrlr": 127, 00:20:38.708 "in_capsule_data_size": 4096, 00:20:38.708 "max_io_size": 131072, 00:20:38.708 "io_unit_size": 131072, 00:20:38.708 "max_aq_depth": 128, 00:20:38.708 "num_shared_buffers": 511, 00:20:38.708 "buf_cache_size": 4294967295, 00:20:38.708 "dif_insert_or_strip": false, 00:20:38.708 "zcopy": false, 00:20:38.708 "c2h_success": false, 00:20:38.708 "sock_priority": 0, 00:20:38.708 "abort_timeout_sec": 1, 00:20:38.708 "ack_timeout": 0, 00:20:38.708 "data_wr_pool_size": 0 00:20:38.708 } 00:20:38.708 }, 00:20:38.708 { 00:20:38.708 "method": "nvmf_create_subsystem", 00:20:38.708 "params": { 00:20:38.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.708 "allow_any_host": false, 00:20:38.708 "serial_number": "00000000000000000000", 00:20:38.708 "model_number": "SPDK bdev Controller", 00:20:38.708 "max_namespaces": 32, 00:20:38.708 "min_cntlid": 1, 00:20:38.708 "max_cntlid": 65519, 00:20:38.708 "ana_reporting": false 00:20:38.708 } 00:20:38.708 }, 00:20:38.708 { 00:20:38.708 "method": "nvmf_subsystem_add_host", 00:20:38.708 "params": { 00:20:38.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.708 "host": "nqn.2016-06.io.spdk:host1", 00:20:38.708 "psk": "key0" 00:20:38.708 } 00:20:38.708 }, 00:20:38.708 { 00:20:38.708 "method": "nvmf_subsystem_add_ns", 00:20:38.708 "params": { 00:20:38.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.708 "namespace": { 00:20:38.708 "nsid": 1, 00:20:38.708 "bdev_name": "malloc0", 00:20:38.708 "nguid": "8361E7B88BC443A8A0161FE04B0D1731", 00:20:38.708 "uuid": "8361e7b8-8bc4-43a8-a016-1fe04b0d1731", 00:20:38.708 "no_auto_visible": false 00:20:38.708 } 00:20:38.708 } 00:20:38.708 }, 00:20:38.708 { 00:20:38.708 "method": "nvmf_subsystem_add_listener", 00:20:38.708 "params": { 00:20:38.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.708 "listen_address": { 00:20:38.708 "trtype": "TCP", 00:20:38.708 "adrfam": "IPv4", 00:20:38.708 "traddr": "10.0.0.2", 00:20:38.708 "trsvcid": "4420" 00:20:38.708 }, 00:20:38.708 
"secure_channel": false, 00:20:38.708 "sock_impl": "ssl" 00:20:38.708 } 00:20:38.708 } 00:20:38.708 ] 00:20:38.708 } 00:20:38.708 ] 00:20:38.708 }' 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2760420 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2760420 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2760420 ']' 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.708 11:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 [2024-11-20 11:21:31.409076] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:38.708 [2024-11-20 11:21:31.409137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.968 [2024-11-20 11:21:31.498626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.968 [2024-11-20 11:21:31.529005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.968 [2024-11-20 11:21:31.529035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.968 [2024-11-20 11:21:31.529041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.968 [2024-11-20 11:21:31.529045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.968 [2024-11-20 11:21:31.529050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.968 [2024-11-20 11:21:31.529542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.228 [2024-11-20 11:21:31.722545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.228 [2024-11-20 11:21:31.754573] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.228 [2024-11-20 11:21:31.754777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2760451 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2760451 /var/tmp/bdevperf.sock 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2760451 ']' 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
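The initiator gets the same config-replay treatment: `$bperfcfg`, saved from the first bdevperf instance at target/tls.sh@268, comes in as `-c /dev/fd/63`, so the keyring entry and the TLS-attached `nvme0` controller are created from the config file before any RPC is issued. All that remains over RPC is to confirm the controller exists and rerun the I/O pass; the commands are verbatim from the trace (the process substitution is again inferred):

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests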
00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.489 11:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:39.489 "subsystems": [ 00:20:39.489 { 00:20:39.489 "subsystem": "keyring", 00:20:39.489 "config": [ 00:20:39.489 { 00:20:39.489 "method": "keyring_file_add_key", 00:20:39.489 "params": { 00:20:39.489 "name": "key0", 00:20:39.489 "path": "/tmp/tmp.tyI0Y1O7iz" 00:20:39.489 } 00:20:39.489 } 00:20:39.489 ] 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "subsystem": "iobuf", 00:20:39.489 "config": [ 00:20:39.489 { 00:20:39.489 "method": "iobuf_set_options", 00:20:39.489 "params": { 00:20:39.489 "small_pool_count": 8192, 00:20:39.489 "large_pool_count": 1024, 00:20:39.489 "small_bufsize": 8192, 00:20:39.489 "large_bufsize": 135168, 00:20:39.489 "enable_numa": false 00:20:39.489 } 00:20:39.489 } 00:20:39.489 ] 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "subsystem": "sock", 00:20:39.489 "config": [ 00:20:39.489 { 00:20:39.489 "method": "sock_set_default_impl", 00:20:39.489 "params": { 00:20:39.489 "impl_name": "posix" 00:20:39.489 } 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "method": "sock_impl_set_options", 00:20:39.489 "params": { 00:20:39.489 "impl_name": "ssl", 00:20:39.489 "recv_buf_size": 4096, 00:20:39.489 "send_buf_size": 4096, 00:20:39.489 "enable_recv_pipe": true, 00:20:39.489 "enable_quickack": false, 00:20:39.489 "enable_placement_id": 0, 00:20:39.489 "enable_zerocopy_send_server": true, 00:20:39.489 "enable_zerocopy_send_client": false, 00:20:39.489 "zerocopy_threshold": 0, 00:20:39.489 "tls_version": 0, 00:20:39.489 "enable_ktls": false 00:20:39.489 } 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "method": "sock_impl_set_options", 00:20:39.489 "params": { 00:20:39.489 "impl_name": "posix", 00:20:39.489 "recv_buf_size": 2097152, 00:20:39.489 "send_buf_size": 2097152, 00:20:39.489 "enable_recv_pipe": true, 00:20:39.489 "enable_quickack": false, 00:20:39.489 "enable_placement_id": 0, 00:20:39.489 "enable_zerocopy_send_server": true, 00:20:39.489 "enable_zerocopy_send_client": false, 00:20:39.489 "zerocopy_threshold": 0, 00:20:39.489 "tls_version": 0, 00:20:39.489 "enable_ktls": false 00:20:39.489 } 00:20:39.489 } 00:20:39.489 ] 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "subsystem": "vmd", 00:20:39.489 "config": [] 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "subsystem": "accel", 00:20:39.489 "config": [ 00:20:39.489 { 00:20:39.489 "method": "accel_set_options", 00:20:39.489 "params": { 00:20:39.489 "small_cache_size": 128, 00:20:39.489 "large_cache_size": 16, 00:20:39.489 "task_count": 2048, 00:20:39.489 "sequence_count": 2048, 00:20:39.489 "buf_count": 2048 00:20:39.489 } 00:20:39.489 } 00:20:39.489 ] 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "subsystem": "bdev", 00:20:39.489 "config": [ 00:20:39.489 { 00:20:39.489 "method": "bdev_set_options", 00:20:39.489 "params": { 00:20:39.489 "bdev_io_pool_size": 65535, 00:20:39.489 "bdev_io_cache_size": 256, 00:20:39.489 "bdev_auto_examine": true, 00:20:39.489 "iobuf_small_cache_size": 128, 00:20:39.489 "iobuf_large_cache_size": 16 00:20:39.489 } 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "method": 
"bdev_raid_set_options", 00:20:39.489 "params": { 00:20:39.489 "process_window_size_kb": 1024, 00:20:39.489 "process_max_bandwidth_mb_sec": 0 00:20:39.489 } 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "method": "bdev_iscsi_set_options", 00:20:39.489 "params": { 00:20:39.489 "timeout_sec": 30 00:20:39.489 } 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "method": "bdev_nvme_set_options", 00:20:39.489 "params": { 00:20:39.489 "action_on_timeout": "none", 00:20:39.489 "timeout_us": 0, 00:20:39.489 "timeout_admin_us": 0, 00:20:39.489 "keep_alive_timeout_ms": 10000, 00:20:39.489 "arbitration_burst": 0, 00:20:39.489 "low_priority_weight": 0, 00:20:39.489 "medium_priority_weight": 0, 00:20:39.489 "high_priority_weight": 0, 00:20:39.489 "nvme_adminq_poll_period_us": 10000, 00:20:39.489 "nvme_ioq_poll_period_us": 0, 00:20:39.489 "io_queue_requests": 512, 00:20:39.489 "delay_cmd_submit": true, 00:20:39.489 "transport_retry_count": 4, 00:20:39.489 "bdev_retry_count": 3, 00:20:39.489 "transport_ack_timeout": 0, 00:20:39.489 "ctrlr_loss_timeout_sec": 0, 00:20:39.489 "reconnect_delay_sec": 0, 00:20:39.489 "fast_io_fail_timeout_sec": 0, 00:20:39.489 "disable_auto_failback": false, 00:20:39.489 "generate_uuids": false, 00:20:39.489 "transport_tos": 0, 00:20:39.489 "nvme_error_stat": false, 00:20:39.489 "rdma_srq_size": 0, 00:20:39.489 "io_path_stat": false, 00:20:39.489 "allow_accel_sequence": false, 00:20:39.489 "rdma_max_cq_size": 0, 00:20:39.489 "rdma_cm_event_timeout_ms": 0, 00:20:39.489 "dhchap_digests": [ 00:20:39.489 "sha256", 00:20:39.489 "sha384", 00:20:39.489 "sha512" 00:20:39.489 ], 00:20:39.489 "dhchap_dhgroups": [ 00:20:39.489 "null", 00:20:39.489 "ffdhe2048", 00:20:39.489 "ffdhe3072", 00:20:39.489 "ffdhe4096", 00:20:39.489 "ffdhe6144", 00:20:39.489 "ffdhe8192" 00:20:39.489 ] 00:20:39.489 } 00:20:39.489 }, 00:20:39.489 { 00:20:39.489 "method": "bdev_nvme_attach_controller", 00:20:39.489 "params": { 00:20:39.489 "name": "nvme0", 00:20:39.489 "trtype": "TCP", 00:20:39.489 "adrfam": "IPv4", 00:20:39.489 "traddr": "10.0.0.2", 00:20:39.489 "trsvcid": "4420", 00:20:39.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.489 "prchk_reftag": false, 00:20:39.489 "prchk_guard": false, 00:20:39.489 "ctrlr_loss_timeout_sec": 0, 00:20:39.489 "reconnect_delay_sec": 0, 00:20:39.489 "fast_io_fail_timeout_sec": 0, 00:20:39.489 "psk": "key0", 00:20:39.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.489 "hdgst": false, 00:20:39.490 "ddgst": false, 00:20:39.490 "multipath": "multipath" 00:20:39.490 } 00:20:39.490 }, 00:20:39.490 { 00:20:39.490 "method": "bdev_nvme_set_hotplug", 00:20:39.490 "params": { 00:20:39.490 "period_us": 100000, 00:20:39.490 "enable": false 00:20:39.490 } 00:20:39.490 }, 00:20:39.490 { 00:20:39.490 "method": "bdev_enable_histogram", 00:20:39.490 "params": { 00:20:39.490 "name": "nvme0n1", 00:20:39.490 "enable": true 00:20:39.490 } 00:20:39.490 }, 00:20:39.490 { 00:20:39.490 "method": "bdev_wait_for_examine" 00:20:39.490 } 00:20:39.490 ] 00:20:39.490 }, 00:20:39.490 { 00:20:39.490 "subsystem": "nbd", 00:20:39.490 "config": [] 00:20:39.490 } 00:20:39.490 ] 00:20:39.490 }' 00:20:39.750 [2024-11-20 11:21:32.279342] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:20:39.750 [2024-11-20 11:21:32.279409] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760451 ]
00:20:39.750 [2024-11-20 11:21:32.366298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:39.750 [2024-11-20 11:21:32.396236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:40.011 [2024-11-20 11:21:32.531128] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:40.580 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:40.580 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:40.580 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:40.580 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:20:40.580 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:40.580 11:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:40.840 Running I/O for 1 seconds...
00:20:41.779 4306.00 IOPS, 16.82 MiB/s
00:20:41.779 Latency(us)
00:20:41.779 [2024-11-20T10:21:34.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.779 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:20:41.779 Verification LBA range: start 0x0 length 0x2000
00:20:41.779 nvme0n1 : 1.01 4375.81 17.09 0.00 0.00 29073.43 4805.97 21845.33
00:20:41.779 [2024-11-20T10:21:34.521Z] ===================================================================================================================
00:20:41.779 [2024-11-20T10:21:34.521Z] Total : 4375.81 17.09 0.00 0.00 29073.43 4805.97 21845.33
00:20:41.779 {
00:20:41.779 "results": [
00:20:41.779 {
00:20:41.779 "job": "nvme0n1",
00:20:41.779 "core_mask": "0x2",
00:20:41.779 "workload": "verify",
00:20:41.779 "status": "finished",
00:20:41.779 "verify_range": {
00:20:41.779 "start": 0,
00:20:41.779 "length": 8192
00:20:41.779 },
00:20:41.779 "queue_depth": 128,
00:20:41.779 "io_size": 4096,
00:20:41.779 "runtime": 1.013527,
00:20:41.779 "iops": 4375.808439242369,
00:20:41.779 "mibps": 17.093001715790503,
00:20:41.779 "io_failed": 0,
00:20:41.779 "io_timeout": 0,
00:20:41.779 "avg_latency_us": 29073.42647125141,
00:20:41.779 "min_latency_us": 4805.973333333333,
00:20:41.779 "max_latency_us": 21845.333333333332
00:20:41.779 }
00:20:41.779 ],
00:20:41.779 "core_count": 1
00:20:41.779 }
00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:41.779 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:41.780 nvmf_trace.0 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2760451 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2760451 ']' 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2760451 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.780 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760451 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760451' 00:20:42.040 killing process with pid 2760451 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2760451 00:20:42.040 Received shutdown signal, test time was about 1.000000 seconds 00:20:42.040 00:20:42.040 Latency(us) 00:20:42.040 [2024-11-20T10:21:34.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.040 [2024-11-20T10:21:34.782Z] =================================================================================================================== 00:20:42.040 [2024-11-20T10:21:34.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2760451 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:42.040 rmmod nvme_tcp 00:20:42.040 rmmod nvme_fabrics 00:20:42.040 rmmod nvme_keyring 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:42.040 11:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2760420 ']' 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2760420 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2760420 ']' 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2760420 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.040 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760420 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760420' 00:20:42.301 killing process with pid 2760420 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2760420 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2760420 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.301 11:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.846 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:44.846 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MATpjaNoxZ /tmp/tmp.uxAoTssTxJ /tmp/tmp.tyI0Y1O7iz 00:20:44.846 00:20:44.846 real 1m28.406s 00:20:44.846 user 2m19.890s 00:20:44.846 sys 0m27.319s 00:20:44.846 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.846 11:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.846 ************************************ 00:20:44.846 END TEST nvmf_tls 
00:20:44.846 ************************************ 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:44.846 ************************************ 00:20:44.846 START TEST nvmf_fips 00:20:44.846 ************************************ 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:44.846 * Looking for test storage... 00:20:44.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:44.846 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:44.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.847 --rc genhtml_branch_coverage=1 00:20:44.847 --rc genhtml_function_coverage=1 00:20:44.847 --rc genhtml_legend=1 00:20:44.847 --rc geninfo_all_blocks=1 00:20:44.847 --rc geninfo_unexecuted_blocks=1 00:20:44.847 00:20:44.847 ' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:44.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.847 --rc genhtml_branch_coverage=1 00:20:44.847 --rc genhtml_function_coverage=1 00:20:44.847 --rc genhtml_legend=1 00:20:44.847 --rc geninfo_all_blocks=1 00:20:44.847 --rc geninfo_unexecuted_blocks=1 00:20:44.847 00:20:44.847 ' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:44.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.847 --rc genhtml_branch_coverage=1 00:20:44.847 --rc genhtml_function_coverage=1 00:20:44.847 --rc genhtml_legend=1 00:20:44.847 --rc geninfo_all_blocks=1 00:20:44.847 --rc geninfo_unexecuted_blocks=1 00:20:44.847 00:20:44.847 ' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:44.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.847 --rc genhtml_branch_coverage=1 00:20:44.847 --rc genhtml_function_coverage=1 00:20:44.847 --rc genhtml_legend=1 00:20:44.847 --rc geninfo_all_blocks=1 00:20:44.847 --rc geninfo_unexecuted_blocks=1 00:20:44.847 00:20:44.847 ' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:44.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:44.847 11:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.847 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:44.848 Error setting digest 00:20:44.848 40F26472B77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:44.848 40F26472B77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.848 
11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.848 11:21:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.990 11:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:52.990 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:52.990 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.990 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.991 11:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:52.991 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:52.991 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.991 11:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:20:52.991 00:20:52.991 --- 10.0.0.2 ping statistics --- 00:20:52.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.991 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:52.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:20:52.991 00:20:52.991 --- 10.0.0.1 ping statistics --- 00:20:52.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.991 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2765215 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2765215 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2765215 ']' 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.991 11:21:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 [2024-11-20 11:21:44.974600] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
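By this point nvmf/common.sh has built and verified the wired topology for the FIPS suite: the first e810 port (cvl_0_0) now lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2, the second port (cvl_0_1) stayed in the root namespace as 10.0.0.1, the firewall was opened for the NVMe/TCP port, and both directions answered a ping. The nvmf_tgt serving the test is then started inside that namespace; the "Starting SPDK" notice above and the EAL parameters line below belong to that launch. The setup condenses to roughly this sequence, with the interface and namespace names udev and the harness assigned in this run (the harness also flushes stale addresses first, and tags its iptables rule with an SPDK_NVMF comment so it can be stripped later):

  # Target NIC in its own netns, so initiator and target traverse a real
  # cable even though both ends run on one host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP (port 4420) traffic through the host firewall
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator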
00:20:52.991 [2024-11-20 11:21:44.974677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.991 [2024-11-20 11:21:45.073941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.991 [2024-11-20 11:21:45.124478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.991 [2024-11-20 11:21:45.124530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.991 [2024-11-20 11:21:45.124539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.991 [2024-11-20 11:21:45.124546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.991 [2024-11-20 11:21:45.124553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.991 [2024-11-20 11:21:45.125373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.PLp 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.PLp 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.PLp 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.PLp 00:20:53.253 11:21:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.253 [2024-11-20 11:21:45.987534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.514 [2024-11-20 11:21:46.003522] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.514 [2024-11-20 11:21:46.003767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.514 malloc0 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.514 11:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2765517 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2765517 /var/tmp/bdevperf.sock 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2765517 ']' 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.514 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:53.514 [2024-11-20 11:21:46.158285] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:20:53.514 [2024-11-20 11:21:46.158364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765517 ] 00:20:53.514 [2024-11-20 11:21:46.251125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.775 [2024-11-20 11:21:46.302093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.348 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.348 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:54.348 11:21:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.PLp 00:20:54.609 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.609 [2024-11-20 11:21:47.315404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.870 TLSTESTn1 00:20:54.870 11:21:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.870 Running I/O for 10 seconds... 
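The ten-second verify run that streams its per-second samples next is driven entirely over bdevperf's RPC socket; nothing TLS-specific sits in bdevperf's command line this time. Collected in one place, the exact sequence from the fips.sh entries above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 1. register the PSK file in the application keyring under the name key0
  "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.PLp
  # 2. attach the TLS-protected controller; --psk names the keyring entry
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # 3. kick off the configured workload (-q 128 -o 4096 -w verify -t 10)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests

The ordering matters: bdev_nvme_attach_controller resolves --psk against the keyring, not against the filesystem, so the key file has to be registered as key0 first.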
00:20:57.193 5676.00 IOPS, 22.17 MiB/s [2024-11-20T10:21:50.875Z] 5653.50 IOPS, 22.08 MiB/s [2024-11-20T10:21:51.816Z] 5172.67 IOPS, 20.21 MiB/s [2024-11-20T10:21:52.757Z] 5308.50 IOPS, 20.74 MiB/s [2024-11-20T10:21:53.700Z] 5352.60 IOPS, 20.91 MiB/s [2024-11-20T10:21:54.642Z] 5277.33 IOPS, 20.61 MiB/s [2024-11-20T10:21:55.584Z] 5266.86 IOPS, 20.57 MiB/s [2024-11-20T10:21:56.969Z] 5364.12 IOPS, 20.95 MiB/s [2024-11-20T10:21:57.541Z] 5430.22 IOPS, 21.21 MiB/s [2024-11-20T10:21:57.802Z] 5389.70 IOPS, 21.05 MiB/s
00:21:05.060 Latency(us)
00:21:05.060 [2024-11-20T10:21:57.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:05.060 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:05.060 Verification LBA range: start 0x0 length 0x2000
00:21:05.060 TLSTESTn1 : 10.01 5396.08 21.08 0.00 0.00 23687.94 4287.15 23920.64
00:21:05.060 [2024-11-20T10:21:57.802Z] ===================================================================================================================
00:21:05.060 [2024-11-20T10:21:57.802Z] Total : 5396.08 21.08 0.00 0.00 23687.94 4287.15 23920.64
00:21:05.060 {
00:21:05.060 "results": [
00:21:05.060 {
00:21:05.060 "job": "TLSTESTn1",
00:21:05.060 "core_mask": "0x4",
00:21:05.060 "workload": "verify",
00:21:05.060 "status": "finished",
00:21:05.060 "verify_range": {
00:21:05.060 "start": 0,
00:21:05.060 "length": 8192
00:21:05.060 },
00:21:05.060 "queue_depth": 128,
00:21:05.060 "io_size": 4096,
00:21:05.060 "runtime": 10.011341,
00:21:05.060 "iops": 5396.0803053257305,
00:21:05.060 "mibps": 21.078438692678635,
00:21:05.060 "io_failed": 0,
00:21:05.060 "io_timeout": 0,
00:21:05.060 "avg_latency_us": 23687.93662236373,
00:21:05.060 "min_latency_us": 4287.1466666666665,
00:21:05.060 "max_latency_us": 23920.64
00:21:05.060 }
00:21:05.060 ],
00:21:05.060 "core_count": 1
00:21:05.060 }
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:05.061 nvmf_trace.0
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2765517
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2765517 ']'
00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2765517 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2765517 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2765517' 00:21:05.061 killing process with pid 2765517 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2765517 00:21:05.061 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.061 00:21:05.061 Latency(us) 00:21:05.061 [2024-11-20T10:21:57.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.061 [2024-11-20T10:21:57.803Z] =================================================================================================================== 00:21:05.061 [2024-11-20T10:21:57.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.061 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2765517 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:05.321 rmmod nvme_tcp 00:21:05.321 rmmod nvme_fabrics 00:21:05.321 rmmod nvme_keyring 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:05.321 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2765215 ']' 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2765215 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2765215 ']' 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2765215 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2765215 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:05.322 11:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2765215' 00:21:05.322 killing process with pid 2765215 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2765215 00:21:05.322 11:21:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2765215 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.583 11:21:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.PLp 00:21:07.497 00:21:07.497 real 0m23.131s 00:21:07.497 user 0m24.935s 00:21:07.497 sys 0m9.565s 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:07.497 ************************************ 00:21:07.497 END TEST nvmf_fips 00:21:07.497 ************************************ 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.497 11:22:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.759 ************************************ 00:21:07.759 START TEST nvmf_control_msg_list 00:21:07.759 ************************************ 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:07.759 * Looking for test storage... 
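The nvmftestfini teardown that just closed the FIPS test condenses to roughly the following. The module names, the SPDK_NVMF iptables filter, and the interface/namespace names are all taken from the trace; the body of _remove_spdk_ns is not shown in this log, so the netns delete is an assumption about what that helper does:

  # Unload host-side NVMe fabrics modules (the rmmod messages above come from these).
  for m in nvme-tcp nvme-fabrics nvme-keyring; do modprobe -v -r "$m"; done
  # Restore iptables minus anything tagged SPDK_NVMF, i.e. drop only rules the test added.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.PLp           # don't leave the PSK on disk

Tagging every rule it inserts with an SPDK_NVMF comment is what lets the harness strip its own firewall changes with a single grep instead of tracking rule numbers.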
00:21:07.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.759 --rc genhtml_branch_coverage=1 00:21:07.759 --rc genhtml_function_coverage=1 00:21:07.759 --rc genhtml_legend=1 00:21:07.759 --rc geninfo_all_blocks=1 00:21:07.759 --rc geninfo_unexecuted_blocks=1 00:21:07.759 00:21:07.759 ' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.759 --rc genhtml_branch_coverage=1 00:21:07.759 --rc genhtml_function_coverage=1 00:21:07.759 --rc genhtml_legend=1 00:21:07.759 --rc geninfo_all_blocks=1 00:21:07.759 --rc geninfo_unexecuted_blocks=1 00:21:07.759 00:21:07.759 ' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.759 --rc genhtml_branch_coverage=1 00:21:07.759 --rc genhtml_function_coverage=1 00:21:07.759 --rc genhtml_legend=1 00:21:07.759 --rc geninfo_all_blocks=1 00:21:07.759 --rc geninfo_unexecuted_blocks=1 00:21:07.759 00:21:07.759 ' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.759 --rc genhtml_branch_coverage=1 00:21:07.759 --rc genhtml_function_coverage=1 00:21:07.759 --rc genhtml_legend=1 00:21:07.759 --rc geninfo_all_blocks=1 00:21:07.759 --rc geninfo_unexecuted_blocks=1 00:21:07.759 00:21:07.759 ' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.759 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.760 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:08.021 11:22:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:16.161 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:16.162 11:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:16.162 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.162 11:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:16.162 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:16.162 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:16.162 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.162 11:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:16.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:21:16.162 00:21:16.162 --- 10.0.0.2 ping statistics --- 00:21:16.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.162 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:21:16.162 00:21:16.162 --- 10.0.0.1 ping statistics --- 00:21:16.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.162 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:16.162 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 11:22:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2771947 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2771947 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2771947 ']' 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 [2024-11-20 11:22:08.058893] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:21:16.163 [2024-11-20 11:22:08.058960] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.163 [2024-11-20 11:22:08.131560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.163 [2024-11-20 11:22:08.178220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.163 [2024-11-20 11:22:08.178270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.163 [2024-11-20 11:22:08.178277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.163 [2024-11-20 11:22:08.178282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.163 [2024-11-20 11:22:08.178287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
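The network bring-up and target launch traced above condense to the sequence below; interface names, addresses, and flags are verbatim from the trace. Worth noting in passing: the earlier '[: : integer expression expected' complaint from nvmf/common.sh line 33 comes from the guard '[' '' -eq 1 ']' feeding an empty variable to a numeric test; a defaulted form such as [ "${FLAG:-0}" -eq 1 ] would silence it (FLAG is a stand-in, the real variable name is not visible in this trace):

  # Put one port of the E810 pair into a private namespace so the target and
  # initiator talk over the physical link even though both run on this host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, tagging the rule so cleanup can find it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  # Finally start the target inside the namespace, as nvmfappstart does above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF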
00:21:16.163 [2024-11-20 11:22:08.178959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 [2024-11-20 11:22:08.338910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 Malloc0 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.163 11:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:16.163 [2024-11-20 11:22:08.392659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2772134 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2772136 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2772138 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2772134 00:21:16.163 11:22:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.163 [2024-11-20 11:22:08.493456] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:16.163 [2024-11-20 11:22:08.493846] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:16.163 [2024-11-20 11:22:08.494106] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:17.194 Initializing NVMe Controllers 00:21:17.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:17.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:17.194 Initialization complete. Launching workers. 
00:21:17.194 ======================================================== 00:21:17.194 Latency(us) 00:21:17.194 Device Information : IOPS MiB/s Average min max 00:21:17.194 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1565.00 6.11 638.88 232.65 835.98 00:21:17.194 ======================================================== 00:21:17.194 Total : 1565.00 6.11 638.88 232.65 835.98 00:21:17.194 00:21:17.194 Initializing NVMe Controllers 00:21:17.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:17.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:17.194 Initialization complete. Launching workers. 00:21:17.194 ======================================================== 00:21:17.194 Latency(us) 00:21:17.194 Device Information : IOPS MiB/s Average min max 00:21:17.194 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40904.20 40809.77 40985.45 00:21:17.194 ======================================================== 00:21:17.194 Total : 25.00 0.10 40904.20 40809.77 40985.45 00:21:17.194 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2772136 00:21:17.194 Initializing NVMe Controllers 00:21:17.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:17.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:17.194 Initialization complete. Launching workers. 00:21:17.194 ======================================================== 00:21:17.194 Latency(us) 00:21:17.194 Device Information : IOPS MiB/s Average min max 00:21:17.194 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1595.00 6.23 626.92 148.21 824.08 00:21:17.194 ======================================================== 00:21:17.194 Total : 1595.00 6.23 626.92 148.21 824.08 00:21:17.194 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2772138 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.194 rmmod nvme_tcp 00:21:17.194 rmmod nvme_fabrics 00:21:17.194 rmmod nvme_keyring 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 2771947 ']' 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2771947 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2771947 ']' 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2771947 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2771947 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.194 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2771947' 00:21:17.194 killing process with pid 2771947 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2771947 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2771947 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:17.195 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:17.483 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.483 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:17.483 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.483 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.483 11:22:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.397 11:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.397 00:21:19.397 real 0m11.739s 00:21:19.397 user 0m7.152s 00:21:19.397 sys 0m6.440s 00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:19.397 ************************************ 00:21:19.397 END TEST nvmf_control_msg_list 00:21:19.397 ************************************ 
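Stripped of the harness plumbing, the control_msg_list scenario above is: one TCP transport deliberately limited to a single control message buffer, one subsystem backed by a small malloc bdev, and three single-queue readers racing for that buffer. All values below come from the trace; rpc.py is assumed to target the default /var/tmp/spdk.sock inside the namespace:

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $RPC bdev_malloc_create -b Malloc0 32 512          # 32 MB bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  for mask in 0x2 0x4 0x8; do                        # three initiators, one core each
      ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

The contention shows up directly in the numbers above: two readers average roughly 0.63 ms per 4 KiB read, while the third manages only 25 IOPS at about 40.9 ms average, consistent with it spending nearly the whole one-second run waiting its turn for the lone control-message buffer.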
00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.397 ************************************ 00:21:19.397 START TEST nvmf_wait_for_buf 00:21:19.397 ************************************ 00:21:19.397 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:19.659 * Looking for test storage... 00:21:19.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.659 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.659 --rc genhtml_branch_coverage=1 00:21:19.659 --rc genhtml_function_coverage=1 00:21:19.659 --rc genhtml_legend=1 00:21:19.659 --rc geninfo_all_blocks=1 00:21:19.659 --rc geninfo_unexecuted_blocks=1 00:21:19.659 00:21:19.659 ' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:19.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.660 --rc genhtml_branch_coverage=1 00:21:19.660 --rc genhtml_function_coverage=1 00:21:19.660 --rc genhtml_legend=1 00:21:19.660 --rc geninfo_all_blocks=1 00:21:19.660 --rc geninfo_unexecuted_blocks=1 00:21:19.660 00:21:19.660 ' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:19.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.660 --rc genhtml_branch_coverage=1 00:21:19.660 --rc genhtml_function_coverage=1 00:21:19.660 --rc genhtml_legend=1 00:21:19.660 --rc geninfo_all_blocks=1 00:21:19.660 --rc geninfo_unexecuted_blocks=1 00:21:19.660 00:21:19.660 ' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:19.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.660 --rc genhtml_branch_coverage=1 00:21:19.660 --rc genhtml_function_coverage=1 00:21:19.660 --rc genhtml_legend=1 00:21:19.660 --rc geninfo_all_blocks=1 00:21:19.660 --rc geninfo_unexecuted_blocks=1 00:21:19.660 00:21:19.660 ' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.660 11:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
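Each nested source of /etc/opt/spdk-pkgdep/paths/export.sh re-prepends the Go, protoc and golangci directories, which is why the PATH echoed above carries the same entries many times over. A guard like the following would keep the prepend idempotent (path_prepend is a hypothetical helper, not part of export.sh):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH untouched
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH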
'[' -z tcp ']' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.660 11:22:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.807 
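The gather step running here classifies NICs by PCI vendor:device ID before choosing which ports the test may use. A condensed sketch, assuming pci_bus_cache is an associative array of "vendor:device" keys to PCI addresses as nvmf/common.sh builds it (the IDs below are the ones echoed in the trace):

    declare -A pci_bus_cache                 # filled elsewhere from lspci/sysfs
    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x101b"]} ${pci_bus_cache["$mellanox:0x1017"]})
    pci_devs=("${e810[@]}")                  # SPDK_TEST_NVMF_NICS=e810 narrows to E810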
11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:27.807 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:27.807 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:27.807 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:27.807 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.807 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.808 11:22:19 
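Both E810 ports (0000:4b:00.0 and 0000:4b:00.1) were just resolved to the kernel interfaces cvl_0_0 and cvl_0_1 through sysfs. The mapping reduces to this loop, condensed from the trace above (pci_devs comes from the classification step):

    for pci in "${pci_devs[@]}"; do
        # Each bound port exposes its netdev name under its PCI sysfs node.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the iface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done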
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:21:27.808 00:21:27.808 --- 10.0.0.2 ping statistics --- 00:21:27.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.808 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:21:27.808 00:21:27.808 --- 10.0.0.1 ping statistics --- 00:21:27.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.808 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2776555 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2776555 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2776555 ']' 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.808 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:27.808 [2024-11-20 11:22:19.928974] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
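Condensed replay of the nvmf_tcp_init bring-up traced above: the first port moves into a fresh network namespace as the target side (10.0.0.2) while the second stays in the root namespace as the initiator (10.0.0.1), and both directions are verified with a ping. The commands are verbatim from the trace and need root:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can
    # later strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns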
00:21:27.808 [2024-11-20 11:22:19.929040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.808 [2024-11-20 11:22:20.029186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.808 [2024-11-20 11:22:20.081061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.808 [2024-11-20 11:22:20.081110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.808 [2024-11-20 11:22:20.081119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.808 [2024-11-20 11:22:20.081127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.808 [2024-11-20 11:22:20.081133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.808 [2024-11-20 11:22:20.081884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:28.069 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.069 11:22:20 
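nvmfappstart launched the target inside the namespace with --wait-for-rpc and then blocked in waitforlisten until the RPC socket answered. A sketch of that polling step (loop shape and retry count are illustrative, not autotest_common.sh verbatim; rpc_get_methods is a standard SPDK RPC):

    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app starts answering.
    for ((i = 0; i < 100; i++)); do
        rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done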
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 Malloc0 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 [2024-11-20 11:22:20.902555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:28.330 [2024-11-20 11:22:20.938858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.330 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:28.330 [2024-11-20 11:22:21.044282] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:29.716 Initializing NVMe Controllers 00:21:29.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:29.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:29.717 Initialization complete. Launching workers. 00:21:29.717 ======================================================== 00:21:29.717 Latency(us) 00:21:29.717 Device Information : IOPS MiB/s Average min max 00:21:29.717 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 26.00 3.25 160687.78 8013.36 191553.25 00:21:29.717 ======================================================== 00:21:29.717 Total : 26.00 3.25 160687.78 8013.36 191553.25 00:21:29.717 00:21:29.717 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:29.717 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:29.717 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.717 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:29.717 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=390 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 390 -eq 0 ]] 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.979 rmmod nvme_tcp 00:21:29.979 rmmod nvme_fabrics 00:21:29.979 rmmod nvme_keyring 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2776555 ']' 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2776555 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2776555 ']' 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2776555 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2776555 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2776555' 00:21:29.979 killing process with pid 2776555 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2776555 00:21:29.979 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2776555 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.241 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.158 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:32.158 00:21:32.158 real 0m12.784s 00:21:32.158 user 0m5.178s 00:21:32.158 sys 0m6.198s 00:21:32.158 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:32.158 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:32.158 ************************************ 00:21:32.158 END TEST nvmf_wait_for_buf 00:21:32.158 ************************************ 00:21:32.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:32.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:32.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:32.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:32.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:32.418 11:22:24 
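Recap of what the just-finished wait_for_buf test actually exercised, with every RPC and flag copied from the trace: the iobuf small pool is capped at 154 8 KiB buffers while perf issues 128 KiB reads (16 iobufs each) at queue depth 4, so buffer demand can outrun the shrunken pool and allocations hit the wait-for-buffer path; the pass condition is a nonzero small_pool.retry, 390 in this run. rpc.py invocation paths are abbreviated:

    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24   # flags verbatim from the trace
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'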
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:40.569 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:40.569 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:40.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:40.569 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.569 ************************************ 00:21:40.569 START TEST nvmf_perf_adq 00:21:40.569 ************************************ 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:40.569 * Looking for test storage... 00:21:40.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.569 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.570 11:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.570 --rc genhtml_branch_coverage=1 00:21:40.570 --rc genhtml_function_coverage=1 00:21:40.570 --rc genhtml_legend=1 00:21:40.570 --rc geninfo_all_blocks=1 00:21:40.570 --rc geninfo_unexecuted_blocks=1 00:21:40.570 00:21:40.570 ' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.570 --rc genhtml_branch_coverage=1 00:21:40.570 --rc genhtml_function_coverage=1 00:21:40.570 --rc genhtml_legend=1 00:21:40.570 --rc geninfo_all_blocks=1 00:21:40.570 --rc geninfo_unexecuted_blocks=1 00:21:40.570 00:21:40.570 ' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.570 --rc genhtml_branch_coverage=1 00:21:40.570 --rc genhtml_function_coverage=1 00:21:40.570 --rc genhtml_legend=1 00:21:40.570 --rc geninfo_all_blocks=1 00:21:40.570 --rc geninfo_unexecuted_blocks=1 00:21:40.570 00:21:40.570 ' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.570 --rc genhtml_branch_coverage=1 00:21:40.570 --rc genhtml_function_coverage=1 00:21:40.570 --rc genhtml_legend=1 00:21:40.570 --rc geninfo_all_blocks=1 00:21:40.570 --rc geninfo_unexecuted_blocks=1 00:21:40.570 00:21:40.570 ' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
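nvmf_perf_adq just entered through the same run_test harness that wrapped the previous test; it is the source of the START/END TEST banners and the real/user/sys summary. Its observable shape, condensed (not the verbatim autotest_common.sh implementation):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # e.g. .../test/nvmf/target/perf_adq.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp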
00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:40.570 11:22:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.570 11:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.160 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:47.161 11:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:47.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:47.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:47.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:47.161 11:22:39 
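The "Found 0000:4b:00.x" lines around this point come from matching each PCI function's vendor:device pair against known NIC families (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox IDs). A hedged sketch of that matching, reading sysfs directly rather than the pci_bus_cache that nvmf/common.sh actually walks, and listing only a subset of the IDs probed above:

intel=0x8086 mellanox=0x15b3
declare -A family=(
    ["$intel:0x1592"]=e810 ["$intel:0x159b"]=e810   # 0x159b matches 0000:4b:00.0/1 here
    ["$intel:0x37d2"]=x722
    ["$mellanox:0x1017"]=mlx ["$mellanox:0x1019"]=mlx
)
for pci in /sys/bus/pci/devices/*; do
    key="$(<"$pci/vendor"):$(<"$pci/device")"       # e.g. 0x8086:0x159b
    [[ -n ${family[$key]:-} ]] &&
        echo "Found ${pci##*/} (${key%:*} - ${key#*:}) -> ${family[$key]}"
done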
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:47.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:47.161 11:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:48.547 11:22:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:50.462 11:22:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
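Before the target is brought up, adq_reload_driver (the rmmod/modprobe sequence logged above) recycles the ice driver so the E810 queue layout starts from a clean state, with sch_mqprio loaded for the traffic-class qdisc configured later in the run. Condensed:

modprobe -a sch_mqprio    # qdisc module required by the mqprio setup further down
rmmod ice                 # drop the E810 driver; its netdevs disappear
modprobe ice              # reload it; the ports re-initialize from scratch
sleep 5                   # give the netdevs time to reappear before nvmftestinit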
gather_supported_nvmf_pci_devs 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:55.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:55.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:55.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.757 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:55.758 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:55.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:21:55.758 00:21:55.758 --- 10.0.0.2 ping statistics --- 00:21:55.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.758 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:21:55.758 00:21:55.758 --- 10.0.0.1 ping statistics --- 00:21:55.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.758 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.758 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2786789 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2786789 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2786789 ']' 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.020 11:22:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.020 [2024-11-20 11:22:48.590044] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
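nvmf_tcp_init above splits the two E810 ports between network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic really crosses the physical link. The steps, condensed from the log (the target's DPDK startup continues below):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                        # root ns -> target must answer
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator must answer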
00:21:56.020 [2024-11-20 11:22:48.590110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.020 [2024-11-20 11:22:48.692474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.020 [2024-11-20 11:22:48.746283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.020 [2024-11-20 11:22:48.746338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.020 [2024-11-20 11:22:48.746347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.020 [2024-11-20 11:22:48.746354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.020 [2024-11-20 11:22:48.746360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.020 [2024-11-20 11:22:48.748776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.020 [2024-11-20 11:22:48.748937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.020 [2024-11-20 11:22:48.749099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.020 [2024-11-20 11:22:48.749099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:56.965 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 
11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 [2024-11-20 11:22:49.605316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 Malloc1 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:56.966 [2024-11-20 11:22:49.681341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2786996 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:56.966 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
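With the target paused on --wait-for-rpc, adq_configure_nvmf_target 0 issues the RPC sequence above: posix sock options with placement-id disabled, framework start, a TCP transport, and a 64 MiB malloc namespace exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420. A hedged reconstruction as direct scripts/rpc.py calls (rpc_cmd in this harness is a thin wrapper; the default /var/tmp/spdk.sock socket is assumed); the spdk_nvme_perf invocation that drives it resumes below:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$rpc framework_start_init                      # releases the --wait-for-rpc pause
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420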
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:21:59.517 "tick_rate": 2400000000,
00:21:59.517 "poll_groups": [
00:21:59.517 {
00:21:59.517 "name": "nvmf_tgt_poll_group_000",
00:21:59.517 "admin_qpairs": 1,
00:21:59.517 "io_qpairs": 1,
00:21:59.517 "current_admin_qpairs": 1,
00:21:59.517 "current_io_qpairs": 1,
00:21:59.517 "pending_bdev_io": 0,
00:21:59.517 "completed_nvme_io": 15507,
00:21:59.517 "transports": [
00:21:59.517 {
00:21:59.517 "trtype": "TCP"
00:21:59.517 }
00:21:59.517 ]
00:21:59.517 },
00:21:59.517 {
00:21:59.517 "name": "nvmf_tgt_poll_group_001",
00:21:59.517 "admin_qpairs": 0,
00:21:59.517 "io_qpairs": 1,
00:21:59.517 "current_admin_qpairs": 0,
00:21:59.517 "current_io_qpairs": 1,
00:21:59.517 "pending_bdev_io": 0,
00:21:59.517 "completed_nvme_io": 16388,
00:21:59.517 "transports": [
00:21:59.517 {
00:21:59.517 "trtype": "TCP"
00:21:59.517 }
00:21:59.517 ]
00:21:59.517 },
00:21:59.517 {
00:21:59.517 "name": "nvmf_tgt_poll_group_002",
00:21:59.517 "admin_qpairs": 0,
00:21:59.517 "io_qpairs": 1,
00:21:59.517 "current_admin_qpairs": 0,
00:21:59.517 "current_io_qpairs": 1,
00:21:59.517 "pending_bdev_io": 0,
00:21:59.517 "completed_nvme_io": 16539,
00:21:59.517 "transports": [
00:21:59.517 {
00:21:59.517 "trtype": "TCP"
00:21:59.517 }
00:21:59.517 ]
00:21:59.517 },
00:21:59.517 {
00:21:59.517 "name": "nvmf_tgt_poll_group_003",
00:21:59.517 "admin_qpairs": 0,
00:21:59.517 "io_qpairs": 1,
00:21:59.517 "current_admin_qpairs": 0,
00:21:59.517 "current_io_qpairs": 1,
00:21:59.517 "pending_bdev_io": 0,
00:21:59.517 "completed_nvme_io": 15575,
00:21:59.517 "transports": [
00:21:59.517 {
00:21:59.517 "trtype": "TCP"
00:21:59.517 }
00:21:59.517 ]
00:21:59.517 }
00:21:59.517 ]
00:21:59.517 }'
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:21:59.517 11:22:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2786996
00:22:07.657 Initializing NVMe Controllers
00:22:07.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:07.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:07.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:07.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:07.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:07.657 Initialization complete. Launching workers.
00:22:07.657 ========================================================
00:22:07.657 Latency(us)
00:22:07.657 Device Information : IOPS MiB/s Average min max
00:22:07.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12204.70 47.67 5258.71 1062.01 44695.98
00:22:07.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12871.60 50.28 4971.73 1284.37 15261.02
00:22:07.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13152.30 51.38 4865.71 1228.40 15296.92
00:22:07.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12872.00 50.28 4971.50 1395.63 12694.74
00:22:07.657 ========================================================
00:22:07.657 Total : 51100.59 199.61 5012.93 1062.01 44695.98
00:22:07.657
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:07.657 rmmod nvme_tcp
00:22:07.657 rmmod nvme_fabrics
00:22:07.657 rmmod nvme_keyring
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2786789 ']'
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2786789
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2786789 ']'
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2786789
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:07.657 11:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2786789
00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2786789'
00:22:07.657 killing process with pid 2786789
00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2786789
00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2786789
00:22:07.657 11:23:00
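In the summary table above, the Total row is the straight sum of the four per-core IOPS and MiB/s figures, and the average latency is the IOPS-weighted mean of the per-core averages; a quick recomputation from the printed values:

awk 'BEGIN {
    # per-core IOPS and average latency (us), copied from the table above
    iops[0]=12204.70; lat[0]=5258.71
    iops[1]=12871.60; lat[1]=4971.73
    iops[2]=13152.30; lat[2]=4865.71
    iops[3]=12872.00; lat[3]=4971.50
    for (i = 0; i < 4; i++) { sum += iops[i]; w += iops[i] * lat[i] }
    printf "total %.2f IOPS, weighted mean latency %.2f us\n", sum, w / sum
}'
# prints ~51100.60 IOPS and ~5012.9 us; the table's 51100.59 differs only by
# rounding of the per-core values it prints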
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.657 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.571 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.571 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:09.571 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:09.571 11:23:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:11.485 11:23:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:13.402 11:23:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:18.697 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:18.698 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:18.698 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:18.698 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:18.698 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.698 11:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.698 11:23:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.698 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.698 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.698 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:18.698 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:22:18.698 00:22:18.698 --- 10.0.0.2 ping statistics --- 00:22:18.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.698 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:22:18.698 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:22:18.698 00:22:18.699 --- 10.0.0.1 ping statistics --- 00:22:18.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.699 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:18.699 net.core.busy_poll = 1 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:18.699 net.core.busy_read = 1 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2791606 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2791606 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2791606 ']' 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.699 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.961 [2024-11-20 11:23:11.469569] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:22:18.961 [2024-11-20 11:23:11.469636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.961 [2024-11-20 11:23:11.568905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.961 [2024-11-20 11:23:11.622144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
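The adq_configure_driver steps traced above condense to a short, reusable sequence. A minimal sketch, assuming an E810/ice port named cvl_0_0 already moved into the cvl_0_0_ns_spdk namespace (both names are taken from this run; adjust for other hosts):

NS=cvl_0_0_ns_spdk IFACE=cvl_0_0
# enable hardware traffic-class offload and disable the packet-inspect optimization
ip netns exec "$NS" ethtool --offload "$IFACE" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
# host-wide busy-polling knobs that ADQ relies on
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (NVMe/TCP)
ip netns exec "$NS" tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev "$IFACE" ingress
# steer NVMe/TCP (dst port 4420) into TC1, offloaded to hardware (skip_sw)
ip netns exec "$NS" tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1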
00:22:18.961 [2024-11-20 11:23:11.622207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.961 [2024-11-20 11:23:11.622215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.961 [2024-11-20 11:23:11.622222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.961 [2024-11-20 11:23:11.622229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.961 [2024-11-20 11:23:11.624638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.961 [2024-11-20 11:23:11.624797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.961 [2024-11-20 11:23:11.624957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.961 [2024-11-20 11:23:11.624958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.906 11:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.906 [2024-11-20 11:23:12.489428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.906 Malloc1 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.906 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.907 [2024-11-20 11:23:12.569704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2791807 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:19.907 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.447 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:22.447 "tick_rate": 2400000000, 00:22:22.447 "poll_groups": [ 00:22:22.447 { 00:22:22.447 "name": "nvmf_tgt_poll_group_000", 00:22:22.447 "admin_qpairs": 1, 00:22:22.447 "io_qpairs": 1, 00:22:22.447 "current_admin_qpairs": 1, 00:22:22.447 "current_io_qpairs": 1, 00:22:22.447 "pending_bdev_io": 0, 00:22:22.447 "completed_nvme_io": 26699, 00:22:22.447 "transports": [ 00:22:22.447 { 00:22:22.447 "trtype": "TCP" 00:22:22.447 } 00:22:22.447 ] 00:22:22.447 }, 00:22:22.447 { 00:22:22.447 "name": "nvmf_tgt_poll_group_001", 00:22:22.447 "admin_qpairs": 0, 00:22:22.447 "io_qpairs": 3, 00:22:22.447 "current_admin_qpairs": 0, 00:22:22.447 "current_io_qpairs": 3, 00:22:22.447 "pending_bdev_io": 0, 00:22:22.447 "completed_nvme_io": 28040, 00:22:22.447 "transports": [ 00:22:22.447 { 00:22:22.447 "trtype": "TCP" 00:22:22.447 } 00:22:22.447 ] 00:22:22.447 }, 00:22:22.447 { 00:22:22.447 "name": "nvmf_tgt_poll_group_002", 00:22:22.447 "admin_qpairs": 0, 00:22:22.447 "io_qpairs": 0, 00:22:22.447 "current_admin_qpairs": 0, 00:22:22.447 "current_io_qpairs": 0, 00:22:22.447 "pending_bdev_io": 0, 00:22:22.447 "completed_nvme_io": 0, 00:22:22.447 "transports": [ 00:22:22.447 { 00:22:22.447 "trtype": "TCP" 00:22:22.447 } 00:22:22.447 ] 00:22:22.447 }, 00:22:22.447 { 00:22:22.447 "name": "nvmf_tgt_poll_group_003", 00:22:22.447 "admin_qpairs": 0, 00:22:22.447 "io_qpairs": 0, 00:22:22.447 "current_admin_qpairs": 0, 00:22:22.447 "current_io_qpairs": 0, 00:22:22.447 "pending_bdev_io": 0, 00:22:22.447 "completed_nvme_io": 0, 00:22:22.447 "transports": [ 00:22:22.447 { 00:22:22.447 "trtype": "TCP" 00:22:22.447 } 00:22:22.447 ] 00:22:22.447 } 00:22:22.447 ] 00:22:22.447 }' 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:22:22.447 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2791807 00:22:30.727 Initializing NVMe Controllers 00:22:30.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:30.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:30.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:30.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:30.727 Initialization complete. Launching workers. 
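The nvmf_get_stats check above is the actual ADQ assertion: the target runs four poll groups, and the test requires at least two of them to report zero active IO qpairs, showing that connections were steered onto a subset of cores. A standalone sketch of the same check; the direct rpc.py invocation is an assumption (the harness goes through its rpc_cmd wrapper), while the jq filter is verbatim from the trace:

count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
# in this run count=2, so the '[[ 2 -lt 2 ]]' guard does not fire and the test proceeds
if [[ $count -lt 2 ]]; then
    echo "ADQ steering ineffective: only $count idle poll groups" >&2
fi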
00:22:30.727 ========================================================
00:22:30.727 Latency(us)
00:22:30.727 Device Information : IOPS MiB/s Average min max
00:22:30.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5348.20 20.89 12007.75 1241.00 59364.78
00:22:30.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 19045.39 74.40 3359.97 1390.83 46843.64
00:22:30.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9328.50 36.44 6863.27 1214.05 60987.25
00:22:30.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5315.90 20.77 12042.41 1228.46 60976.95
00:22:30.727 ========================================================
00:22:30.727 Total : 39037.99 152.49 6564.17 1214.05 60987.25
00:22:30.727
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:30.727 rmmod nvme_tcp
00:22:30.727 rmmod nvme_fabrics
00:22:30.727 rmmod nvme_keyring
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2791606 ']'
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2791606
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2791606 ']'
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2791606
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2791606
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2791606'
00:22:30.727 killing process with pid 2791606
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2791606
00:22:30.727 11:23:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2791606
00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:30.727
11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.727 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:34.062 00:22:34.062 real 0m54.085s 00:22:34.062 user 2m50.239s 00:22:34.062 sys 0m11.603s 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.062 ************************************ 00:22:34.062 END TEST nvmf_perf_adq 00:22:34.062 ************************************ 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.062 ************************************ 00:22:34.062 START TEST nvmf_shutdown 00:22:34.062 ************************************ 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:34.062 * Looking for test storage... 
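The iptr call in the teardown above is the counterpart of the ipts helper used during setup: every rule is inserted with an SPDK_NVMF comment tag, so cleanup can strip exactly those rules in one pass without disturbing the rest of the ruleset. A minimal sketch of the same tagging idiom, using the rule from this run:

# setup: tag the ACCEPT rule so it can be found again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: rewrite the ruleset with every SPDK_NVMF-tagged rule removed
iptables-save | grep -v SPDK_NVMF | iptables-restore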
00:22:34.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.062 --rc genhtml_branch_coverage=1 00:22:34.062 --rc genhtml_function_coverage=1 00:22:34.062 --rc genhtml_legend=1 00:22:34.062 --rc geninfo_all_blocks=1 00:22:34.062 --rc geninfo_unexecuted_blocks=1 00:22:34.062 00:22:34.062 ' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.062 --rc genhtml_branch_coverage=1 00:22:34.062 --rc genhtml_function_coverage=1 00:22:34.062 --rc genhtml_legend=1 00:22:34.062 --rc geninfo_all_blocks=1 00:22:34.062 --rc geninfo_unexecuted_blocks=1 00:22:34.062 00:22:34.062 ' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.062 --rc genhtml_branch_coverage=1 00:22:34.062 --rc genhtml_function_coverage=1 00:22:34.062 --rc genhtml_legend=1 00:22:34.062 --rc geninfo_all_blocks=1 00:22:34.062 --rc geninfo_unexecuted_blocks=1 00:22:34.062 00:22:34.062 ' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.062 --rc genhtml_branch_coverage=1 00:22:34.062 --rc genhtml_function_coverage=1 00:22:34.062 --rc genhtml_legend=1 00:22:34.062 --rc geninfo_all_blocks=1 00:22:34.062 --rc geninfo_unexecuted_blocks=1 00:22:34.062 00:22:34.062 ' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
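The scripts/common.sh trace above is deciding whether the installed lcov predates version 2, comparing version strings field by field (cmp_versions with op '<') to pick the matching --rc option names. A standalone sketch of the same test, swapping the field-wise loop for GNU sort -V purely for brevity:

lt() {  # succeed when $1 is strictly older than $2
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lt 1.15 2 && echo "lcov predates 2.x: use the lcov 1.x --rc option set"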
00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.062 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:34.063 11:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.063 ************************************ 00:22:34.063 START TEST nvmf_shutdown_tc1 00:22:34.063 ************************************ 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.063 11:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.285 11:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.285 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.286 11:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:42.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:42.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:42.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:42.286 11:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:42.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:42.286 11:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:42.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:22:42.286 00:22:42.286 --- 10.0.0.2 ping statistics --- 00:22:42.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.286 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:42.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:42.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:22:42.286 00:22:42.286 --- 10.0.0.1 ping statistics --- 00:22:42.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.286 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2798382 00:22:42.286 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2798382 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2798382 ']' 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
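The nvmf_tcp_init sequence just traced rebuilds the same back-to-back topology as the earlier perf_adq run: one port of the dual-port E810 is moved into a namespace to act as the target side, so initiator and target traffic has to leave the host network stack through a real port pair instead of loopback. Condensed from the trace (interface and namespace names verbatim):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> root ns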
00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.287 [2024-11-20 11:23:34.180430] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:22:42.287 [2024-11-20 11:23:34.180495] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.287 [2024-11-20 11:23:34.254811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.287 [2024-11-20 11:23:34.302293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.287 [2024-11-20 11:23:34.302340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.287 [2024-11-20 11:23:34.302347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.287 [2024-11-20 11:23:34.302353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.287 [2024-11-20 11:23:34.302357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.287 [2024-11-20 11:23:34.304469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.287 [2024-11-20 11:23:34.304696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:42.287 [2024-11-20 11:23:34.304863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.287 [2024-11-20 11:23:34.304863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.287 [2024-11-20 11:23:34.460848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:42.287 11:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.287 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.287 Malloc1 
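The create_subsystems loop above appends one block of RPCs per subsystem (1 through 10) to rpcs.txt and replays the whole file through a single rpc_cmd call on stdin, which is why Malloc1 through Malloc10 appear in a burst here. A minimal sketch of the pattern; the per-subsystem arguments beyond what this trace shows (serial numbers, one shared listener address) are assumptions modeled on the cnode1 setup from the perf_adq run:

rpcs=/tmp/rpcs.txt; : > "$rpcs"
for i in {1..10}; do
cat >> "$rpcs" <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
scripts/rpc.py < "$rpcs"   # one process, forty RPCs, instead of forty round trips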
00:22:42.287 [2024-11-20 11:23:34.587750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.287 Malloc2 00:22:42.287 Malloc3 00:22:42.287 Malloc4 00:22:42.287 Malloc5 00:22:42.287 Malloc6 00:22:42.287 Malloc7 00:22:42.287 Malloc8 00:22:42.287 Malloc9 00:22:42.287 Malloc10 00:22:42.287 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.287 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:42.287 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.287 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2798509 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2798509 /var/tmp/bdevperf.sock 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2798509 ']' 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.549 { 00:22:42.549 "params": { 00:22:42.549 "name": "Nvme$subsystem", 00:22:42.549 "trtype": "$TEST_TRANSPORT", 00:22:42.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.549 "adrfam": "ipv4", 00:22:42.549 "trsvcid": "$NVMF_PORT", 00:22:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.549 "hdgst": ${hdgst:-false}, 00:22:42.549 "ddgst": ${ddgst:-false} 00:22:42.549 }, 00:22:42.549 "method": "bdev_nvme_attach_controller" 00:22:42.549 } 00:22:42.549 EOF 00:22:42.549 )") 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.549 { 00:22:42.549 "params": { 00:22:42.549 "name": "Nvme$subsystem", 00:22:42.549 "trtype": "$TEST_TRANSPORT", 00:22:42.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.549 "adrfam": "ipv4", 00:22:42.549 "trsvcid": "$NVMF_PORT", 00:22:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.549 "hdgst": ${hdgst:-false}, 00:22:42.549 "ddgst": ${ddgst:-false} 00:22:42.549 }, 00:22:42.549 "method": "bdev_nvme_attach_controller" 00:22:42.549 } 00:22:42.549 EOF 00:22:42.549 )") 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.549 { 00:22:42.549 "params": { 00:22:42.549 "name": "Nvme$subsystem", 00:22:42.549 "trtype": "$TEST_TRANSPORT", 00:22:42.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.549 "adrfam": "ipv4", 00:22:42.549 "trsvcid": "$NVMF_PORT", 00:22:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.549 "hdgst": ${hdgst:-false}, 00:22:42.549 "ddgst": ${ddgst:-false} 00:22:42.549 }, 00:22:42.549 "method": "bdev_nvme_attach_controller" 
00:22:42.549 } 00:22:42.549 EOF 00:22:42.549 )") 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.549 { 00:22:42.549 "params": { 00:22:42.549 "name": "Nvme$subsystem", 00:22:42.549 "trtype": "$TEST_TRANSPORT", 00:22:42.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.549 "adrfam": "ipv4", 00:22:42.549 "trsvcid": "$NVMF_PORT", 00:22:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.549 "hdgst": ${hdgst:-false}, 00:22:42.549 "ddgst": ${ddgst:-false} 00:22:42.549 }, 00:22:42.549 "method": "bdev_nvme_attach_controller" 00:22:42.549 } 00:22:42.549 EOF 00:22:42.549 )") 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.549 { 00:22:42.549 "params": { 00:22:42.549 "name": "Nvme$subsystem", 00:22:42.549 "trtype": "$TEST_TRANSPORT", 00:22:42.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.549 "adrfam": "ipv4", 00:22:42.549 "trsvcid": "$NVMF_PORT", 00:22:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.549 "hdgst": ${hdgst:-false}, 00:22:42.549 "ddgst": ${ddgst:-false} 00:22:42.549 }, 00:22:42.549 "method": "bdev_nvme_attach_controller" 00:22:42.549 } 00:22:42.549 EOF 00:22:42.549 )") 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.549 { 00:22:42.549 "params": { 00:22:42.549 "name": "Nvme$subsystem", 00:22:42.549 "trtype": "$TEST_TRANSPORT", 00:22:42.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.549 "adrfam": "ipv4", 00:22:42.549 "trsvcid": "$NVMF_PORT", 00:22:42.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.549 "hdgst": ${hdgst:-false}, 00:22:42.549 "ddgst": ${ddgst:-false} 00:22:42.549 }, 00:22:42.549 "method": "bdev_nvme_attach_controller" 00:22:42.549 } 00:22:42.549 EOF 00:22:42.549 )") 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.549 [2024-11-20 11:23:35.097150] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
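Each pass through the nvmf/common.sh@562 loop traced here captures one heredoc into the config array: the $(cat <<EOF ...) command substitution expands $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT, and the ${hdgst:-false}/${ddgst:-false} defaults at capture time, so each element is a finished JSON object describing one controller attach. The loop body, reassembled from the repeated @582 traces above:

config=()
for subsystem in "${@:-1}"; do
  # One fully expanded attach-controller object per argument; the
  # hdgst/ddgst defaults resolve now, not when the JSON is consumed.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done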
00:22:42.549 [2024-11-20 11:23:35.097229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:42.549 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.550 { 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme$subsystem", 00:22:42.550 "trtype": "$TEST_TRANSPORT", 00:22:42.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "$NVMF_PORT", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.550 "hdgst": ${hdgst:-false}, 00:22:42.550 "ddgst": ${ddgst:-false} 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 } 00:22:42.550 EOF 00:22:42.550 )") 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.550 { 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme$subsystem", 00:22:42.550 "trtype": "$TEST_TRANSPORT", 00:22:42.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "$NVMF_PORT", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.550 "hdgst": ${hdgst:-false}, 00:22:42.550 "ddgst": ${ddgst:-false} 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 } 00:22:42.550 EOF 00:22:42.550 )") 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.550 { 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme$subsystem", 00:22:42.550 "trtype": "$TEST_TRANSPORT", 00:22:42.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "$NVMF_PORT", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.550 "hdgst": ${hdgst:-false}, 00:22:42.550 "ddgst": ${ddgst:-false} 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 } 00:22:42.550 EOF 00:22:42.550 )") 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:42.550 { 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme$subsystem", 00:22:42.550 "trtype": "$TEST_TRANSPORT", 00:22:42.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:42.550 "adrfam": "ipv4", 
00:22:42.550 "trsvcid": "$NVMF_PORT", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:42.550 "hdgst": ${hdgst:-false}, 00:22:42.550 "ddgst": ${ddgst:-false} 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 } 00:22:42.550 EOF 00:22:42.550 )") 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:42.550 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme1", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme2", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme3", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme4", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme5", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme6", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme7", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 
"adrfam": "ipv4", 00:22:42.550 "trsvcid": "4420", 00:22:42.550 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:42.550 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:42.550 "hdgst": false, 00:22:42.550 "ddgst": false 00:22:42.550 }, 00:22:42.550 "method": "bdev_nvme_attach_controller" 00:22:42.550 },{ 00:22:42.550 "params": { 00:22:42.550 "name": "Nvme8", 00:22:42.550 "trtype": "tcp", 00:22:42.550 "traddr": "10.0.0.2", 00:22:42.550 "adrfam": "ipv4", 00:22:42.551 "trsvcid": "4420", 00:22:42.551 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:42.551 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:42.551 "hdgst": false, 00:22:42.551 "ddgst": false 00:22:42.551 }, 00:22:42.551 "method": "bdev_nvme_attach_controller" 00:22:42.551 },{ 00:22:42.551 "params": { 00:22:42.551 "name": "Nvme9", 00:22:42.551 "trtype": "tcp", 00:22:42.551 "traddr": "10.0.0.2", 00:22:42.551 "adrfam": "ipv4", 00:22:42.551 "trsvcid": "4420", 00:22:42.551 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:42.551 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:42.551 "hdgst": false, 00:22:42.551 "ddgst": false 00:22:42.551 }, 00:22:42.551 "method": "bdev_nvme_attach_controller" 00:22:42.551 },{ 00:22:42.551 "params": { 00:22:42.551 "name": "Nvme10", 00:22:42.551 "trtype": "tcp", 00:22:42.551 "traddr": "10.0.0.2", 00:22:42.551 "adrfam": "ipv4", 00:22:42.551 "trsvcid": "4420", 00:22:42.551 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:42.551 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:42.551 "hdgst": false, 00:22:42.551 "ddgst": false 00:22:42.551 }, 00:22:42.551 "method": "bdev_nvme_attach_controller" 00:22:42.551 }' 00:22:42.551 [2024-11-20 11:23:35.192563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.551 [2024-11-20 11:23:35.246089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2798509 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:43.936 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:44.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2798509 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2798382 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.878 { 00:22:44.878 "params": { 00:22:44.878 "name": "Nvme$subsystem", 00:22:44.878 "trtype": "$TEST_TRANSPORT", 00:22:44.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.878 "adrfam": "ipv4", 00:22:44.878 "trsvcid": "$NVMF_PORT", 00:22:44.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.878 "hdgst": ${hdgst:-false}, 00:22:44.878 "ddgst": ${ddgst:-false} 00:22:44.878 }, 00:22:44.878 "method": "bdev_nvme_attach_controller" 00:22:44.878 } 00:22:44.878 EOF 00:22:44.878 )") 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.878 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.878 { 00:22:44.878 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 [2024-11-20 11:23:37.551343] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
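Once the loop finishes, the nvmf/common.sh@584 through @586 steps traced below join the captured fragments with IFS=, and hand the result to jq, which both validates the JSON and pretty-prints the fully expanded controller list seen in the printf output. Note that "${config[*]}" (not "${config[@]}") is what the trace implies, since the joined objects appear as a single comma-separated argument. The enclosing subsystems/bdev wrapper sketched here is an assumption; only the join and the jq pass are visible in the log:

# Comma-join the fragments inside an assumed bdev-subsystem wrapper and let
# jq reject any malformed expansion before the app ever sees it.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(
          IFS=,
          printf '%s\n' "${config[*]}"
        )
      ]
    }
  ]
}
JSON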
00:22:44.879 [2024-11-20 11:23:37.551393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799050 ] 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:44.879 { 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme$subsystem", 00:22:44.879 "trtype": "$TEST_TRANSPORT", 00:22:44.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.879 
"adrfam": "ipv4", 00:22:44.879 "trsvcid": "$NVMF_PORT", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.879 "hdgst": ${hdgst:-false}, 00:22:44.879 "ddgst": ${ddgst:-false} 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 } 00:22:44.879 EOF 00:22:44.879 )") 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:44.879 11:23:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme1", 00:22:44.879 "trtype": "tcp", 00:22:44.879 "traddr": "10.0.0.2", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "4420", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.879 "hdgst": false, 00:22:44.879 "ddgst": false 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 },{ 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme2", 00:22:44.879 "trtype": "tcp", 00:22:44.879 "traddr": "10.0.0.2", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "4420", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:44.879 "hdgst": false, 00:22:44.879 "ddgst": false 00:22:44.879 }, 00:22:44.879 "method": "bdev_nvme_attach_controller" 00:22:44.879 },{ 00:22:44.879 "params": { 00:22:44.879 "name": "Nvme3", 00:22:44.879 "trtype": "tcp", 00:22:44.879 "traddr": "10.0.0.2", 00:22:44.879 "adrfam": "ipv4", 00:22:44.879 "trsvcid": "4420", 00:22:44.879 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:44.879 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:44.879 "hdgst": false, 00:22:44.879 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme4", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme5", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme6", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme7", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 
00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme8", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme9", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 },{ 00:22:44.880 "params": { 00:22:44.880 "name": "Nvme10", 00:22:44.880 "trtype": "tcp", 00:22:44.880 "traddr": "10.0.0.2", 00:22:44.880 "adrfam": "ipv4", 00:22:44.880 "trsvcid": "4420", 00:22:44.880 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:44.880 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:44.880 "hdgst": false, 00:22:44.880 "ddgst": false 00:22:44.880 }, 00:22:44.880 "method": "bdev_nvme_attach_controller" 00:22:44.880 }' 00:22:45.141 [2024-11-20 11:23:37.641578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.141 [2024-11-20 11:23:37.677417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.527 Running I/O for 1 seconds... 
00:22:47.470 1856.00 IOPS, 116.00 MiB/s
00:22:47.470 Latency(us)
00:22:47.470 [2024-11-20T10:23:40.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.470 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.470 Verification LBA range: start 0x0 length 0x400
00:22:47.470 Nvme1n1 : 1.11 230.48 14.40 0.00 0.00 274785.92 19770.03 234181.97
00:22:47.470 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.470 Verification LBA range: start 0x0 length 0x400
00:22:47.470 Nvme2n1 : 1.16 219.89 13.74 0.00 0.00 283456.85 19442.35 249910.61
00:22:47.470 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.470 Verification LBA range: start 0x0 length 0x400
00:22:47.470 Nvme3n1 : 1.12 228.45 14.28 0.00 0.00 267678.29 17148.59 248162.99
00:22:47.470 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.470 Verification LBA range: start 0x0 length 0x400
00:22:47.470 Nvme4n1 : 1.17 272.69 17.04 0.00 0.00 220544.77 8847.36 251658.24
00:22:47.470 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.470 Verification LBA range: start 0x0 length 0x400
00:22:47.470 Nvme5n1 : 1.13 226.64 14.16 0.00 0.00 260144.21 39103.15 242920.11
00:22:47.470 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.471 Verification LBA range: start 0x0 length 0x400
00:22:47.471 Nvme6n1 : 1.14 225.15 14.07 0.00 0.00 257128.75 16056.32 253405.87
00:22:47.471 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.471 Verification LBA range: start 0x0 length 0x400
00:22:47.471 Nvme7n1 : 1.13 225.70 14.11 0.00 0.00 251648.43 17257.81 232434.35
00:22:47.471 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.471 Verification LBA range: start 0x0 length 0x400
00:22:47.471 Nvme8n1 : 1.18 270.79 16.92 0.00 0.00 206884.01 15510.19 249910.61
00:22:47.471 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.471 Verification LBA range: start 0x0 length 0x400
00:22:47.471 Nvme9n1 : 1.19 268.56 16.78 0.00 0.00 204945.66 13216.43 248162.99
00:22:47.471 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:47.471 Verification LBA range: start 0x0 length 0x400
00:22:47.471 Nvme10n1 : 1.18 223.92 14.00 0.00 0.00 240505.56 996.69 288358.40
00:22:47.471 [2024-11-20T10:23:40.213Z] ===================================================================================================================
00:22:47.471 [2024-11-20T10:23:40.213Z] Total : 2392.26 149.52 0.00 0.00 244251.07 996.69 288358.40
00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:47.732 11:23:40
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.732 rmmod nvme_tcp 00:22:47.732 rmmod nvme_fabrics 00:22:47.732 rmmod nvme_keyring 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2798382 ']' 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2798382 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2798382 ']' 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2798382 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2798382 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2798382' 00:22:47.732 killing process with pid 2798382 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2798382 00:22:47.732 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2798382 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:47.993 11:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.993 11:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.540 00:22:50.540 real 0m16.152s 00:22:50.540 user 0m30.844s 00:22:50.540 sys 0m6.959s 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:50.540 ************************************ 00:22:50.540 END TEST nvmf_shutdown_tc1 00:22:50.540 ************************************ 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:50.540 ************************************ 00:22:50.540 START TEST nvmf_shutdown_tc2 00:22:50.540 ************************************ 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.540 
11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:50.540 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.541 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.541 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.541 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.541 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:50.541 11:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:50.541 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:50.542 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:50.542 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:50.542 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:50.542 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.542 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:50.542 11:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:50.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:22:50.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:22:50.542 00:22:50.542 --- 10.0.0.2 ping statistics --- 00:22:50.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.542 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:50.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:50.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:22:50.542 00:22:50.542 --- 10.0.0.1 ping statistics --- 00:22:50.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:50.542 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2800310 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2800310 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2800310 ']' 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.542 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:50.542 [2024-11-20 11:23:43.192061] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:22:50.542 [2024-11-20 11:23:43.192123] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.802 [2024-11-20 11:23:43.284122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.802 [2024-11-20 11:23:43.314565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.802 [2024-11-20 11:23:43.314592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.802 [2024-11-20 11:23:43.314598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.802 [2024-11-20 11:23:43.314603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.802 [2024-11-20 11:23:43.314607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.802 [2024-11-20 11:23:43.316063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.802 [2024-11-20 11:23:43.316218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.802 [2024-11-20 11:23:43.316499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.802 [2024-11-20 11:23:43.316499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:51.372 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.372 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:51.372 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.372 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.372 11:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.372 [2024-11-20 11:23:44.040675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.372 
11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.372 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
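The ten cat invocations traced above each append one subsystem's worth of RPCs to rpcs.txt, and the single rpc_cmd call at shutdown.sh@36 replays the whole file over the target's RPC socket in one batch, which is why only one rpc_cmd appears for ten subsystems. The heredoc body itself is elided by the tracer, so the following is only a sketch of what each iteration likely emits: the Malloc$i bdev names, the cnode$i NQNs, and the tcp listener on 10.0.0.2:4420 are confirmed by the notices further down, while the malloc size/block-size arguments and the SPDK$i serial numbers are assumptions.

    for i in "${num_subsystems[@]}"; do
        cat >> "$testdir/rpcs.txt" <<EOL
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOL
    done
    rpc_cmd < "$testdir/rpcs.txt"   # one JSON-RPC session for all of the above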
00:22:51.373 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.633 Malloc1 00:22:51.633 [2024-11-20 11:23:44.154301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.633 Malloc2 00:22:51.633 Malloc3 00:22:51.633 Malloc4 00:22:51.633 Malloc5 00:22:51.633 Malloc6 00:22:51.633 Malloc7 00:22:51.897 Malloc8 00:22:51.897 Malloc9 00:22:51.897 Malloc10 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2800538 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2800538 /var/tmp/bdevperf.sock 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2800538 ']' 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
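With the subsystems in place, the trace next launches bdevperf against all ten of them at once. Two details of the command line that follows are easy to miss: --json /dev/fd/63 is the footprint of bash process substitution, meaning the output of gen_nvmf_target_json is handed to bdevperf as a pseudo-file without ever touching disk, and the remaining flags pin the workload: queue depth 64 (-q), 64 KiB I/Os (-o 65536), the verify workload (-w), for ten seconds (-t), all of which reappear verbatim in the per-job result headers later. Written out explicitly (a sketch, with the build path shortened):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10

waitforlisten then polls the new process (max_retries=100, as traced above) until /var/tmp/bdevperf.sock is accepting RPCs before the test continues.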
00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": "bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": "bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": 
"bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": "bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": "bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": "bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.897 [2024-11-20 11:23:44.597984] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:22:51.897 [2024-11-20 11:23:44.598038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800538 ] 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.897 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.897 { 00:22:51.897 "params": { 00:22:51.897 "name": "Nvme$subsystem", 00:22:51.897 "trtype": "$TEST_TRANSPORT", 00:22:51.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.897 "adrfam": "ipv4", 00:22:51.897 "trsvcid": "$NVMF_PORT", 00:22:51.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.897 "hdgst": ${hdgst:-false}, 00:22:51.897 "ddgst": ${ddgst:-false} 00:22:51.897 }, 00:22:51.897 "method": "bdev_nvme_attach_controller" 00:22:51.897 } 00:22:51.897 EOF 00:22:51.897 )") 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.898 { 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme$subsystem", 00:22:51.898 "trtype": "$TEST_TRANSPORT", 00:22:51.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "$NVMF_PORT", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.898 "hdgst": ${hdgst:-false}, 00:22:51.898 "ddgst": ${ddgst:-false} 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 } 00:22:51.898 EOF 00:22:51.898 )") 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.898 { 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme$subsystem", 00:22:51.898 "trtype": "$TEST_TRANSPORT", 00:22:51.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "$NVMF_PORT", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.898 "hdgst": ${hdgst:-false}, 00:22:51.898 "ddgst": ${ddgst:-false} 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 } 00:22:51.898 EOF 00:22:51.898 )") 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:51.898 { 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme$subsystem", 00:22:51.898 "trtype": "$TEST_TRANSPORT", 00:22:51.898 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:51.898 
"adrfam": "ipv4", 00:22:51.898 "trsvcid": "$NVMF_PORT", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:51.898 "hdgst": ${hdgst:-false}, 00:22:51.898 "ddgst": ${ddgst:-false} 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 } 00:22:51.898 EOF 00:22:51.898 )") 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:51.898 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme1", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme2", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme3", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme4", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme5", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme6", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme7", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 
00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme8", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme9", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 },{ 00:22:51.898 "params": { 00:22:51.898 "name": "Nvme10", 00:22:51.898 "trtype": "tcp", 00:22:51.898 "traddr": "10.0.0.2", 00:22:51.898 "adrfam": "ipv4", 00:22:51.898 "trsvcid": "4420", 00:22:51.898 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:51.898 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:51.898 "hdgst": false, 00:22:51.898 "ddgst": false 00:22:51.898 }, 00:22:51.898 "method": "bdev_nvme_attach_controller" 00:22:51.898 }' 00:22:52.158 [2024-11-20 11:23:44.686920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.159 [2024-11-20 11:23:44.723089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.073 Running I/O for 10 seconds... 
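bdevperf is now running its ten-second job, and the records below show shutdown.sh's waitforio helper gating the teardown: it polls Nvme1n1's read counter over the bdevperf RPC socket until at least 100 reads have completed, so the target is shut down while I/O is verifiably in flight. Reconstructed from the xtrace lines that follow (the -z argument checks at shutdown.sh@51/@55 are kept implicit):

    waitforio() { # $1 = RPC socket, $2 = bdev name
        local ret=1 i
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$1" bdev_get_iostat -b "$2" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In this run the counter climbs 3 -> 67 -> 131 across three polls, the helper returns 0, and the test kills bdevperf (pid 2800538) followed by the target itself (pid 2800310).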
00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:54.073 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.334 11:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:54.334 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2800538 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2800538 ']' 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2800538 00:22:54.595 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:54.596 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:54.596 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800538 00:22:54.596 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:54.596 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:54.596 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800538' 00:22:54.596 killing process with pid 2800538 00:22:54.596 11:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2800538 00:22:54.596 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2800538
00:22:54.856 Received shutdown signal, test time was about 0.965123 seconds
00:22:54.856
00:22:54.856                                                                                           Latency(us)
00:22:54.856 [2024-11-20T10:23:47.598Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:22:54.856 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme1n1              :       0.94     205.20      12.83       0.00       0.00   308222.86   34515.63   249910.61
00:22:54.856 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme2n1              :       0.95     268.19      16.76       0.00       0.00   231100.80   19333.12   244667.73
00:22:54.856 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme3n1              :       0.95     269.41      16.84       0.00       0.00   225400.75   21189.97   248162.99
00:22:54.856 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme4n1              :       0.96     266.57      16.66       0.00       0.00   223106.56   22063.79   249910.61
00:22:54.856 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme5n1              :       0.92     207.90      12.99       0.00       0.00   278996.48   23374.51   284863.15
00:22:54.856 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme6n1              :       0.96     265.50      16.59       0.00       0.00   214550.61   19223.89   249910.61
00:22:54.856 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.856    Nvme7n1              :       0.96     267.45      16.72       0.00       0.00   208128.64   20862.29   222822.40
00:22:54.856 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.856 Verification LBA range: start 0x0 length 0x400
00:22:54.857    Nvme8n1              :       0.94     217.92      13.62       0.00       0.00   246225.41    5160.96   253405.87
00:22:54.857 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.857 Verification LBA range: start 0x0 length 0x400
00:22:54.857    Nvme9n1              :       0.95     201.84      12.61       0.00       0.00   262401.99   16384.00   283115.52
00:22:54.857 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:54.857 Verification LBA range: start 0x0 length 0x400
00:22:54.857    Nvme10n1             :       0.94     218.21      13.64       0.00       0.00   233278.60    5406.72   220200.96
00:22:54.857 [2024-11-20T10:23:47.599Z] ===================================================================================================================
00:22:54.857 [2024-11-20T10:23:47.599Z] Total                        :      2388.18     149.26       0.00       0.00   239896.10    5160.96   284863.15
00:22:54.857 11:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2800310 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 11:23:48
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:55.797 rmmod nvme_tcp 00:22:55.797 rmmod nvme_fabrics 00:22:55.797 rmmod nvme_keyring 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:55.797 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:56.057 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2800310 ']' 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2800310 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2800310 ']' 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2800310 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2800310 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2800310' 00:22:56.058 killing process with pid 2800310 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2800310 00:22:56.058 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2800310 00:22:56.319 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.319 11:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.319 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.319 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:56.319 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:56.319 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.319 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.320 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.320 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.320 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.320 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.320 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:58.233 00:22:58.233 real 0m8.155s 00:22:58.233 user 0m25.141s 00:22:58.233 sys 0m1.277s 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:58.233 ************************************ 00:22:58.233 END TEST nvmf_shutdown_tc2 00:22:58.233 ************************************ 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.233 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:58.494 ************************************ 00:22:58.494 START TEST nvmf_shutdown_tc3 00:22:58.494 ************************************ 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.494 11:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.494 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:58.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:58.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.495 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:58.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:58.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.495 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.495 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:22:58.756 00:22:58.756 --- 10.0.0.2 ping statistics --- 00:22:58.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.756 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:22:58.756 00:22:58.756 --- 10.0.0.1 ping statistics --- 00:22:58.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.756 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2801919 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2801919 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2801919 ']' 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:58.756 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.757 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.757 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.757 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.757 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:58.757 [2024-11-20 11:23:51.431302] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:22:58.757 [2024-11-20 11:23:51.431363] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.017 [2024-11-20 11:23:51.528608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.017 [2024-11-20 11:23:51.560016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.017 [2024-11-20 11:23:51.560046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.017 [2024-11-20 11:23:51.560052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.017 [2024-11-20 11:23:51.560056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.017 [2024-11-20 11:23:51.560060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
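A note for readers following the trace: nvmf_tcp_init (nvmf/common.sh@250-291 above) builds a two-port loopback topology in which the target-side port cvl_0_0 is moved into its own network namespace, so initiator-to-target TCP traffic actually crosses the link between the two NIC ports. Condensed into a standalone sketch; the interface names, addresses, and SPDK_NVMF rule tag are taken verbatim from the trace, and it must run as root:

# Re-creation of the namespace topology set up in the trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listen port; the wrapper at nvmf/common.sh@790 tags the rule
# with an SPDK_NVMF comment so the rule can be identified later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host

nvmf_tgt is then launched inside the namespace with core mask -m 0x1E; 0x1E is binary 11110, i.e. cores 1-4, which is why exactly four "Reactor started on core ..." notices follow.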
00:22:59.017 [2024-11-20 11:23:51.561248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.017 [2024-11-20 11:23:51.561505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.017 [2024-11-20 11:23:51.561621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.017 [2024-11-20 11:23:51.561622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.588 [2024-11-20 11:23:52.274433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.588 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.848 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.848 Malloc1 00:22:59.848 [2024-11-20 11:23:52.381485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.848 Malloc2 00:22:59.848 Malloc3 00:22:59.848 Malloc4 00:22:59.848 Malloc5 00:22:59.848 Malloc6 00:22:59.848 Malloc7 00:23:00.109 Malloc8 00:23:00.109 Malloc9 00:23:00.109 Malloc10 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2802225 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2802225 /var/tmp/bdevperf.sock 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2802225 ']' 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.109 11:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 
"name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 [2024-11-20 11:23:52.827899] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:23:00.109 [2024-11-20 11:23:52.827953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802225 ] 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.109 "ddgst": ${ddgst:-false} 00:23:00.109 }, 00:23:00.109 "method": "bdev_nvme_attach_controller" 00:23:00.109 } 00:23:00.109 EOF 00:23:00.109 )") 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.109 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.109 { 00:23:00.109 "params": { 00:23:00.109 "name": "Nvme$subsystem", 00:23:00.109 "trtype": "$TEST_TRANSPORT", 00:23:00.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.109 "adrfam": "ipv4", 00:23:00.109 "trsvcid": "$NVMF_PORT", 00:23:00.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.109 "hdgst": ${hdgst:-false}, 00:23:00.110 "ddgst": ${ddgst:-false} 00:23:00.110 }, 00:23:00.110 "method": "bdev_nvme_attach_controller" 00:23:00.110 } 00:23:00.110 EOF 00:23:00.110 )") 00:23:00.110 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.110 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.110 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.110 { 00:23:00.110 "params": { 00:23:00.110 "name": "Nvme$subsystem", 00:23:00.110 "trtype": "$TEST_TRANSPORT", 00:23:00.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.110 "adrfam": "ipv4", 00:23:00.110 "trsvcid": "$NVMF_PORT", 00:23:00.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.110 "hdgst": ${hdgst:-false}, 00:23:00.110 "ddgst": ${ddgst:-false} 00:23:00.110 }, 00:23:00.110 "method": "bdev_nvme_attach_controller" 00:23:00.110 } 00:23:00.110 EOF 00:23:00.110 )") 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:00.370 { 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme$subsystem", 00:23:00.370 "trtype": "$TEST_TRANSPORT", 00:23:00.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.370 
"adrfam": "ipv4", 00:23:00.370 "trsvcid": "$NVMF_PORT", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.370 "hdgst": ${hdgst:-false}, 00:23:00.370 "ddgst": ${ddgst:-false} 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 } 00:23:00.370 EOF 00:23:00.370 )") 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:00.370 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme1", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 00:23:00.370 "adrfam": "ipv4", 00:23:00.370 "trsvcid": "4420", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.370 "hdgst": false, 00:23:00.370 "ddgst": false 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 },{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme2", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 00:23:00.370 "adrfam": "ipv4", 00:23:00.370 "trsvcid": "4420", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.370 "hdgst": false, 00:23:00.370 "ddgst": false 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 },{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme3", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 00:23:00.370 "adrfam": "ipv4", 00:23:00.370 "trsvcid": "4420", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.370 "hdgst": false, 00:23:00.370 "ddgst": false 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 },{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme4", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 00:23:00.370 "adrfam": "ipv4", 00:23:00.370 "trsvcid": "4420", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.370 "hdgst": false, 00:23:00.370 "ddgst": false 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 },{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme5", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 00:23:00.370 "adrfam": "ipv4", 00:23:00.370 "trsvcid": "4420", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.370 "hdgst": false, 00:23:00.370 "ddgst": false 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 },{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme6", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 00:23:00.370 "adrfam": "ipv4", 00:23:00.370 "trsvcid": "4420", 00:23:00.370 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.370 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.370 "hdgst": false, 00:23:00.370 "ddgst": false 00:23:00.370 }, 00:23:00.370 "method": "bdev_nvme_attach_controller" 00:23:00.370 },{ 00:23:00.370 "params": { 00:23:00.370 "name": "Nvme7", 00:23:00.370 "trtype": "tcp", 00:23:00.370 "traddr": "10.0.0.2", 
00:23:00.370 "adrfam": "ipv4", 00:23:00.371 "trsvcid": "4420", 00:23:00.371 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.371 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:00.371 "hdgst": false, 00:23:00.371 "ddgst": false 00:23:00.371 }, 00:23:00.371 "method": "bdev_nvme_attach_controller" 00:23:00.371 },{ 00:23:00.371 "params": { 00:23:00.371 "name": "Nvme8", 00:23:00.371 "trtype": "tcp", 00:23:00.371 "traddr": "10.0.0.2", 00:23:00.371 "adrfam": "ipv4", 00:23:00.371 "trsvcid": "4420", 00:23:00.371 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.371 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:00.371 "hdgst": false, 00:23:00.371 "ddgst": false 00:23:00.371 }, 00:23:00.371 "method": "bdev_nvme_attach_controller" 00:23:00.371 },{ 00:23:00.371 "params": { 00:23:00.371 "name": "Nvme9", 00:23:00.371 "trtype": "tcp", 00:23:00.371 "traddr": "10.0.0.2", 00:23:00.371 "adrfam": "ipv4", 00:23:00.371 "trsvcid": "4420", 00:23:00.371 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.371 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:00.371 "hdgst": false, 00:23:00.371 "ddgst": false 00:23:00.371 }, 00:23:00.371 "method": "bdev_nvme_attach_controller" 00:23:00.371 },{ 00:23:00.371 "params": { 00:23:00.371 "name": "Nvme10", 00:23:00.371 "trtype": "tcp", 00:23:00.371 "traddr": "10.0.0.2", 00:23:00.371 "adrfam": "ipv4", 00:23:00.371 "trsvcid": "4420", 00:23:00.371 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.371 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.371 "hdgst": false, 00:23:00.371 "ddgst": false 00:23:00.371 }, 00:23:00.371 "method": "bdev_nvme_attach_controller" 00:23:00.371 }' 00:23:00.371 [2024-11-20 11:23:52.917850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.371 [2024-11-20 11:23:52.954086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.281 Running I/O for 10 seconds... 
00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:02.851 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2801919 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2801919 ']' 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2801919 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801919 00:23:03.126 1859.00 IOPS, 116.19 MiB/s [2024-11-20T10:23:55.868Z] 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2801919' 00:23:03.126 killing process with pid 2801919 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2801919 00:23:03.126 11:23:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2801919 00:23:03.126 [2024-11-20 11:23:55.771853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df110 is same with the state(6) to be set 00:23:03.126 [2024-11-20 11:23:55.771927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df110 is same with the state(6) to be set 00:23:03.126 [2024-11-20 11:23:55.772976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d9b0 is same with the state(6) to be set 00:23:03.126 [2024-11-20 11:23:55.773006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d9b0 is same with the state(6) to be set 00:23:03.126 [2024-11-20 11:23:55.773012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d9b0 is same with the state(6) to be set 00:23:03.126 [2024-11-20 11:23:55.773018] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d9b0 is same with the state(6) to be set
00:23:03.127 [... identical *ERROR* line repeated for tqpair=0x80d9b0 at sub-millisecond intervals through 11:23:55.773272 ...]
00:23:03.127 [2024-11-20 11:23:55.774305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df600 is same with the state(6) to be set
00:23:03.127 [... identical *ERROR* line repeated for tqpair=0x7df600 through 11:23:55.774613 ...]
00:23:03.128 [2024-11-20 11:23:55.775680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dfad0 is same with the state(6) to be set
00:23:03.128 [... identical *ERROR* line repeated for tqpair=0x7dfad0 through 11:23:55.775999, where this excerpt is cut off ...]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dfad0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dfad0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dfad0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.128 [2024-11-20 11:23:55.776949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.776999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 
00:23:03.129 [2024-11-20 11:23:55.777009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is 
same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7dffc0 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777863] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.129 [2024-11-20 11:23:55.777897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 
00:23:03.130 [2024-11-20 11:23:55.777969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.777999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0490 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.778643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0960 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is 
same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.130 [2024-11-20 11:23:55.779411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779425] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.779489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0e30 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 
00:23:03.131 [2024-11-20 11:23:55.780133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is 
same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.131 [2024-11-20 11:23:55.780351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
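The block above is a single message logged in a tight loop during qpair teardown: nvmf_tcp_qpair_set_recv_state() is being asked to move the PDU receive state to the value the qpair already holds, and SPDK reports the redundant request instead of applying it. A minimal stand-alone sketch of that guard, inferred from the message format — struct tcp_qpair and set_recv_state here are simplified stand-ins, not SPDK's actual definitions:

```c
#include <stdio.h>

/* Simplified stand-in for the qpair, keeping only the field the guard reads. */
struct tcp_qpair {
	int recv_state;
};

/* Mirrors the check behind the repeated *ERROR* lines: a request to set the
 * receive state to its current value is logged and dropped, not applied. */
static void
set_recv_state(struct tcp_qpair *tqpair, int state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
	/* ...the real function would also do the state-transition work here... */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = 6 };

	set_recv_state(&q, 6); /* redundant request: logged, state left unchanged */
	set_recv_state(&q, 0); /* genuine transition: applied silently */
	return 0;
}
```

Because the guard returns without changing anything, every repeat of the line is the same redundant call being made again; the flood is noisy but does not by itself indicate a state change or data loss.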
00:23:03.131 [2024-11-20 11:23:55.786589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:03.131 [2024-11-20 11:23:55.786623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.132 [same command/completion pair repeated for cid:1, cid:2 and cid:3, then nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e610 is same with the state(6) to be set]
00:23:03.132 [the same four aborted ASYNC EVENT REQUESTs plus a recv-state error repeat for tqpair=0x182a8a0, 0x13fd9f0, 0x1406cb0, 0x1403420, 0x182a6c0, 0x1832180, 0x1404810 and 0x1849180, 11:23:55.786712 through 11:23:55.787431]
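The NOTICE pairs above are SPDK dumping every command still outstanding when the admin submission queue is deleted at disconnect. ASYNC EVENT REQUEST (admin opcode 0c) is intentionally left pending by the host until the controller has an event to report, so each connection always has a few of them to abort at teardown. The "(00/08)" suffix is the NVMe status code type / status code pair; a small stand-alone decoder for the two values seen here (nvme_status_str is illustrative, not an SPDK function):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the "(SCT/SC)" pair printed after each aborted command above.
 * SCT 0x0 is Generic Command Status; within it, SC 0x08 is "Command
 * Aborted due to SQ Deletion" -- the status every queued command gets
 * when its submission queue is torn down. */
static const char *
nvme_status_str(uint8_t sct, uint8_t sc)
{
	if (sct != 0x0) {
		return "non-generic status (see NVMe spec)";
	}
	switch (sc) {
	case 0x00: return "SUCCESS";
	case 0x08: return "ABORTED - SQ DELETION";
	default:   return "other generic status";
	}
}

int main(void)
{
	/* The pair seen throughout this teardown sequence. */
	printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
	return 0;
}
```

The same status then shows up below on the I/O queue (qid:1) for the in-flight WRITEs.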
cdw10:00000000 cdw11:00000000 00:23:03.132 [2024-11-20 11:23:55.787408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.132 [2024-11-20 11:23:55.787416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.132 [2024-11-20 11:23:55.787424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.132 [2024-11-20 11:23:55.787431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1849180 is same with the state(6) to be set 00:23:03.132 [2024-11-20 11:23:55.787868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.787902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.787919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.787936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.787957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.787975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.787992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.787999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
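
Each NOTICE pair in the dump above is SPDK printing a still-outstanding I/O (nvme_io_qpair_print_command) followed by the completion status it was forced to take (spdk_nvme_print_completion): ABORTED - SQ DELETION (00/08) is the status synthesized for every command that was in flight when its submission queue was deleted during the controller reset. A saved log like this can be tallied offline; the following is a minimal sketch, assuming the log was saved with one entry per line as in the raw console output (the tally_aborts helper and the script itself are hypothetical, not part of the SPDK test suite):

    import collections
    import re
    import sys

    # SPDK's qpair command print, e.g. "WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128"
    CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")
    # The forced completion status printed right after it during a reset
    ABORT = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

    def tally_aborts(lines):
        """Count aborted I/O per (opcode, sqid), pairing each command print
        with the completion print that follows it."""
        counts = collections.Counter()
        pending = None
        for line in lines:
            m = CMD.search(line)
            if m:
                pending = (m.group(1), int(m.group(2)))
                continue
            if pending and ABORT.search(line):
                counts[pending] += 1
                pending = None
        return counts

    if __name__ == "__main__":
        with open(sys.argv[1]) as log:
            for (op, sqid), n in sorted(tally_aborts(log).items()):
                print(f"{op} sqid:{sqid}: {n} commands aborted by SQ deletion")

For a burst like the one above this would report one line per (opcode, sqid) pair, which makes it easy to confirm that a reset aborted the whole queue depth rather than a handful of stragglers.
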
00:23:03.133 [2024-11-20 11:23:55.788026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 
[2024-11-20 11:23:55.788203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 
11:23:55.788373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788544] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.133 [2024-11-20 11:23:55.788551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.133 [2024-11-20 11:23:55.788561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.788973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.134 [2024-11-20 11:23:55.788980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.134 [2024-11-20 11:23:55.789006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:03.134 [2024-11-20 11:23:55.790583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.790604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.790610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.790616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.790625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.790631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1320 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.791111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d4e0 is same with the state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.791125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d4e0 is same with the 
state(6) to be set 00:23:03.134 [2024-11-20 11:23:55.791130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80d4e0 is same with the state(6) to be set [... same tcp.c:1773 recv-state message repeated verbatim for tqpair=0x80d4e0, timestamps 11:23:55.791135 through 11:23:55.791431 ...] 00:23:03.135 [2024-11-20 11:23:55.794376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.135 [2024-11-20 11:23:55.794676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.135 [2024-11-20 11:23:55.794683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
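
Note the addressing pattern in this second burst: every aborted WRITE on sqid:1 carries lba = 24576 + 128 * cid with len:128, so the 64 queue entries held one contiguous sequential write covering LBAs 24576 through 32767 at the moment the queue was torn down. A hedged one-off check of that invariant, under the same hypothetical offline-parsing setup as the sketch above (check_sequential is an illustrative helper, and the base/stride defaults are read off this particular log):

    import re

    WRITE = re.compile(r"WRITE sqid:1 cid:(\d+) nsid:1 lba:(\d+) len:(\d+)")

    def check_sequential(lines, base=24576, stride=128):
        """Assert every aborted WRITE print obeys lba == base + stride * cid."""
        for line in lines:
            for cid, lba, length in WRITE.findall(line):
                cid, lba, length = int(cid), int(lba), int(length)
                assert length == stride and lba == base + stride * cid, (
                    f"unexpected I/O at cid {cid}: lba {lba} len {length}")

Using findall rather than a single search keeps the check valid even when several log entries end up flowed onto one physical line.
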
00:23:03.136 [2024-11-20 11:23:55.794958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.794986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.794995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 
[2024-11-20 11:23:55.795133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 
11:23:55.795309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.136 [2024-11-20 11:23:55.795368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.136 [2024-11-20 11:23:55.795377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.137 [2024-11-20 11:23:55.795385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.137 [2024-11-20 11:23:55.795399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.137 [2024-11-20 11:23:55.795406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.137 [2024-11-20 11:23:55.795418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.137 [2024-11-20 11:23:55.795426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.137 [2024-11-20 11:23:55.795436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.137 [2024-11-20 11:23:55.795444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.137 [2024-11-20 11:23:55.795454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.137 [2024-11-20 11:23:55.795462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.137 [2024-11-20 11:23:55.795472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.137 [2024-11-20 11:23:55.795480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.137 [2024-11-20 
11:23:55.795489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.137 [2024-11-20 11:23:55.795497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.137 [2024-11-20 11:23:55.795506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.137 [2024-11-20 11:23:55.795514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.137 [2024-11-20 11:23:55.803988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:03.137 [2024-11-20 11:23:55.804041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406cb0 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131e610 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a8a0 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804121 - 11:23:55.804196] nvme_qpair.c: [condensed: 4 repeated admin command/completion pairs, ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:03.137 [2024-11-20 11:23:55.804204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18399c0 is same with the state(6) to be set
00:23:03.137 [2024-11-20 11:23:55.804230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fd9f0 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403420 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a6c0 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1832180 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1404810 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.804313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1849180 (9): Bad file descriptor
00:23:03.137 [2024-11-20 11:23:55.806097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:03.137 [2024-11-20 11:23:55.806627] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.137 [2024-11-20 11:23:55.806757] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.137 [2024-11-20 11:23:55.807127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.137 [2024-11-20 11:23:55.807147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1406cb0 with addr=10.0.0.2, port=4420
00:23:03.137 [2024-11-20 11:23:55.807156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1406cb0 is same with the state(6) to be set
00:23:03.137 [2024-11-20 11:23:55.807647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.137 [2024-11-20 11:23:55.807688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182a6c0 with addr=10.0.0.2, port=4420
00:23:03.137 [2024-11-20 11:23:55.807699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182a6c0 is same with the state(6) to be set
00:23:03.137 [2024-11-20 11:23:55.807772] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.137 [2024-11-20 11:23:55.807856] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.137 [2024-11-20 11:23:55.807896] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.137 [2024-11-20 11:23:55.808288 - 11:23:55.809293] nvme_qpair.c: [condensed: 55 repeated command/completion pairs, READ sqid:1 cid:9..63 nsid:1 lba:25728..32640 (+128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:03.138 [2024-11-20 11:23:55.809303 - 11:23:55.809453] nvme_qpair.c: [condensed: 9 repeated command/completion pairs, WRITE sqid:1 cid:0..8 nsid:1 lba:32768..33792 (+128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:03.139 [2024-11-20 11:23:55.809462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ac70 is same with the state(6) to be set
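(Editor's note: the flood of "ABORTED - SQ DELETION (00/08)" completions above is SPDK printing every outstanding I/O as its submission queue is deleted during the controller reset; the two hex values are (status code type / status code). Below is a minimal sketch of how a completion callback can recognize this status, assuming SPDK's public NVMe API; the function name and comments are illustrative, not taken from this test.)

#include "spdk/nvme.h"

/* Illustrative I/O completion callback: distinguish "aborted because the
 * submission queue was deleted during a reset" from other failures, so the
 * caller can requeue the I/O instead of failing it permanently. */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* completed successfully */
	}

	/* The log's "(00/08)" is (sct/sc): SPDK_NVME_SCT_GENERIC (0x00)
	 * with SPDK_NVME_SC_ABORTED_SQ_DELETION (0x08). */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Aborted by queue teardown, not a media error; the request
		 * may be resubmitted once the controller reset completes. */
	}
}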
00:23:03.139 [2024-11-20 11:23:55.809542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406cb0 (9): Bad file descriptor
00:23:03.139 [2024-11-20 11:23:55.809557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a6c0 (9): Bad file descriptor
00:23:03.139 [2024-11-20 11:23:55.809632 - 11:23:55.810772] nvme_qpair.c: [condensed: 64 repeated command/completion pairs, READ sqid:1 cid:0..63 nsid:1 lba:24576..32640 (+128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:03.140 [2024-11-20 11:23:55.810780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1808b70 is same with the state(6) to be set
00:23:03.140 [2024-11-20 11:23:55.810877] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.140 [2024-11-20 11:23:55.810921] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:03.140 [2024-11-20 11:23:55.812145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:03.140 [2024-11-20 11:23:55.812184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:03.141 [2024-11-20 11:23:55.812195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:03.141 [2024-11-20 11:23:55.812206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:03.141 [2024-11-20 11:23:55.812216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:03.141 [2024-11-20 11:23:55.812226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:03.141 [2024-11-20 11:23:55.812234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:03.141 [2024-11-20 11:23:55.812243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:03.141 [2024-11-20 11:23:55.812251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
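(Editor's note: the records above are the failure path of SPDK's asynchronous controller reset: nvme_ctrlr_disconnect tears the qpairs down and aborts outstanding I/O, the TCP transport then cannot re-establish the socket because connect() returns errno 111, i.e. ECONNREFUSED, which is expected while the target side is shutting down, and spdk_nvme_ctrlr_reconnect_poll_async eventually reports failure, which bdev_nvme surfaces as "Resetting controller failed". A minimal sketch of that public API flow follows; error handling and poller integration are elided, so this illustrates the API rather than the bdev_nvme implementation.)

#include <errno.h>
#include "spdk/nvme.h"

/* Illustrative synchronous wrapper around SPDK's async reset API. */
static int
reset_ctrlr_sync(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	/* Step 1: disconnect; this aborts outstanding I/O with
	 * "ABORTED - SQ DELETION" and logs "resetting controller". */
	rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	/* Step 2: start the asynchronous reconnect. */
	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	/* Step 3: poll until the reconnect finishes; -EAGAIN means it is
	 * still in progress. */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	/* Nonzero rc here is the "controller reinitialization failed" /
	 * "Resetting controller failed" path seen in the log, reached when
	 * connect() keeps failing with ECONNREFUSED (errno 111). */
	return rc;
}

(In bdev_nvme the poll step runs from a poller rather than a busy loop, and a failed reconnect moves the controller into the failed state, which is why nvme_ctrlr_fail appears immediately before bdev_nvme_reset_ctrlr_complete in the log.)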
00:23:03.141 [2024-11-20 11:23:55.812286 - 11:23:55.813173] nvme_qpair.c: [condensed: 50 repeated command/completion pairs, READ sqid:1 cid:0..49 nsid:1 lba:24576..30848 (+128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:03.142 [2024-11-20 11:23:55.813183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.142 [2024-11-20 11:23:55.813417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.142 [2024-11-20 11:23:55.813425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806450 is same with the state(6) to be set 00:23:03.142 [2024-11-20 11:23:55.814782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:03.142 [2024-11-20 11:23:55.815169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.142 [2024-11-20 11:23:55.815185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1849180 with addr=10.0.0.2, port=4420 00:23:03.142 [2024-11-20 11:23:55.815194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1849180 is same with the state(6) to be set 00:23:03.142 [2024-11-20 11:23:55.815232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18399c0 (9): Bad file descriptor 00:23:03.142 [2024-11-20 11:23:55.815268] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
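The connect() failures above carry errno = 111, which on Linux is ECONNREFUSED: while the controller is resetting, nothing is accepting TCP connections at 10.0.0.2:4420, so each reconnect attempt is refused until the target side is listening again. A minimal standalone C sketch (not part of the test harness; the file name is illustrative) just to make the errno mapping concrete:

    /* errno_demo.c -- show the symbolic name behind errno 111.
     * Build and run: cc -o errno_demo errno_demo.c && ./errno_demo
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, ECONNREFUSED is defined as 111, matching the
         * "connect() failed, errno = 111" lines in this log. */
        printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }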
00:23:03.142 [2024-11-20 11:23:55.816809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:03.142 [2024-11-20 11:23:55.817139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.142 [2024-11-20 11:23:55.817154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1832180 with addr=10.0.0.2, port=4420
00:23:03.142 [2024-11-20 11:23:55.817286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832180 is same with the state(6) to be set
00:23:03.142 [2024-11-20 11:23:55.817297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1849180 (9): Bad file descriptor
00:23:03.142 [2024-11-20 11:23:55.817345 .. 11:23:55.818484] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, repeated for cid:0..63 (lba:24576..32640, step 128); every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.144 [2024-11-20 11:23:55.818492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160c0d0 is same with the state(6) to be set
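The "(00/08)" pair printed with each aborted completion is the NVMe status code type and status code: SCT 0x00 is the generic command status set, and within it SC 0x08 is "Command Aborted due to SQ Deletion", which is exactly what a controller reset does to a submission queue with I/O still outstanding. A minimal sketch, assuming the standard completion-queue-entry dword-3 layout from the NVMe base specification (illustrative only, not SPDK's own decoder; the dword value below is a hypothetical reconstruction of these completions):

    /* status_demo.c -- unpack SCT/SC from an NVMe CQE dword 3.
     * Build and run: cc -o status_demo status_demo.c && ./status_demo
     */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical dword 3 matching this log's completions:
         * phase=0, SC=0x08, SCT=0x0, CRD=0, M=0, DNR=0. */
        uint32_t dw3 = (uint32_t)0x08 << 17;

        unsigned sc  = (dw3 >> 17) & 0xff; /* Status Code      */
        unsigned sct = (dw3 >> 25) & 0x7;  /* Status Code Type */
        unsigned m   = (dw3 >> 30) & 0x1;  /* More             */
        unsigned dnr = (dw3 >> 31) & 0x1;  /* Do Not Retry     */

        /* Prints "(00/08) m:0 dnr:0": generic status set,
         * Command Aborted due to SQ Deletion. */
        printf("(%02x/%02x) m:%u dnr:%u\n", sct, sc, m, dnr);
        return 0;
    }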
00:23:03.144 [2024-11-20 11:23:55.819778 .. 11:23:55.820939] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, repeated for cid:0..63 (lba:24576..32640, step 128); every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.146 [2024-11-20 11:23:55.820947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1807700 is same with the state(6) to be set
00:23:03.146 [2024-11-20 11:23:55.822487 .. 11:23:55.822647] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, repeated for cid:3..11 (lba:24960..25984, step 128); every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.822986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.146 [2024-11-20 11:23:55.822996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.146 [2024-11-20 11:23:55.823004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:03.147 [2024-11-20 11:23:55.823384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 
11:23:55.823562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.823634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.823642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b700 is same with the state(6) to be set 00:23:03.147 [2024-11-20 11:23:55.824896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.824908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.824920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.824928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.824939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.824947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.147 [2024-11-20 11:23:55.824957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-11-20 11:23:55.824965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.148 [2024-11-20 11:23:55.824975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-11-20 11:23:55.824983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.148 [2024-11-20 11:23:55.824993] nvme_qpair.c: 
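The "(00/08)" printed with every aborted completion above is the (SCT/SC) status pair: status code type 0x00 (generic command status) and status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion", i.e. every in-flight READ/WRITE on qid:1 was completed unsuccessfully once its submission queue went away. A minimal decoding sketch; the helper name is illustrative, not SPDK API:

    /* Decode the "(SCT/SC)" pair SPDK prints next to each completion.
     * Only the two values seen in this log are mapped; decode_status()
     * is a hypothetical helper, not part of SPDK. */
    #include <stdio.h>

    static const char *
    decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x00 && sc == 0x08) {
            return "ABORTED - SQ DELETION";   /* NVMe generic status 08h */
        }
        if (sct == 0x00 && sc == 0x00) {
            return "SUCCESS";
        }
        return "OTHER";
    }

    int main(void)
    {
        printf("(00/08) => %s\n", decode_status(0x00, 0x08));
        return 0;
    }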
00:23:03.147 [2024-11-20 11:23:55.824896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.147 [2024-11-20 11:23:55.824908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 52 further identical READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:11-62, lba:25984-32512, len:128 ...]
00:23:03.149 [2024-11-20 11:23:55.825867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.149 [2024-11-20 11:23:55.825876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.149 [2024-11-20 11:23:55.825884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180cc80 is same with the state(6) to be set
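The repeated nvme_tcp_qpair_set_recv_state error above means the qpair is being asked to enter the receive state it already holds; index 6 appears to be the TCP qpair's error receive state in this build, though that mapping is an assumption here. A self-contained sketch of the redundant-transition guard pattern that produces this kind of message, with hypothetical state names rather than SPDK's actual enum:

    /* Illustration of a set-state guard that logs a redundant
     * transition and carries on, as the messages above do. This is
     * not SPDK's nvme_tcp_qpair_set_recv_state(); the enum values
     * are hypothetical. */
    #include <stdio.h>

    enum recv_state {
        RECV_STATE_AWAIT_PDU_READY,
        RECV_STATE_AWAIT_PDU_HDR,
        RECV_STATE_AWAIT_PDU_PAYLOAD,
        RECV_STATE_ERROR = 6,   /* assumed meaning of "state(6)" */
    };

    struct tqpair {
        enum recv_state recv_state;
    };

    static void
    set_recv_state(struct tqpair *tqpair, enum recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Redundant transition: report it, do nothing else. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tqpair q = { .recv_state = RECV_STATE_ERROR };

        set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the message */
        return 0;
    }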
00:23:03.149 [2024-11-20 11:23:55.827118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:03.149 [2024-11-20 11:23:55.827135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:03.149 [2024-11-20 11:23:55.827146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:03.149 [2024-11-20 11:23:55.827460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.149 [2024-11-20 11:23:55.827502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1404810 with addr=10.0.0.2, port=4420
00:23:03.149 [2024-11-20 11:23:55.827514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1404810 is same with the state(6) to be set
00:23:03.149 [2024-11-20 11:23:55.827531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1832180 (9): Bad file descriptor
00:23:03.149 [2024-11-20 11:23:55.827542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:03.149 [2024-11-20 11:23:55.827551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:03.149 [2024-11-20 11:23:55.827560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:03.149 [2024-11-20 11:23:55.827571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:03.149 [2024-11-20 11:23:55.827623] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:03.149 [2024-11-20 11:23:55.827653] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:03.149 [2024-11-20 11:23:55.827663] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:03.149 [2024-11-20 11:23:55.827677] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
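The failover notices above show overlapping recovery requests being coalesced: a reset is already running on cnode8/5/6/1, so a concurrent failover attempt is declined until it finishes. A rough sketch of that in-progress-flag pattern, with hypothetical field and function names rather than bdev_nvme's actual internals:

    /* Sketch of serializing reset/failover on one controller via an
     * in-progress flag. failover_ctrlr() and the struct fields are
     * hypothetical, not bdev_nvme's real data structures. */
    #include <stdbool.h>
    #include <stdio.h>

    struct ctrlr {
        const char *name;
        bool reset_in_progress;
    };

    static bool
    failover_ctrlr(struct ctrlr *c)
    {
        if (c->reset_in_progress) {
            printf("[%s, 1] Unable to perform failover, already in progress.\n", c->name);
            return false;   /* coalesced into the running reset */
        }
        c->reset_in_progress = true;
        printf("[%s, 1] resetting controller\n", c->name);
        return true;
    }

    int main(void)
    {
        struct ctrlr c = { .name = "nqn.2016-06.io.spdk:cnode8" };

        failover_ctrlr(&c);   /* starts the reset */
        failover_ctrlr(&c);   /* second request declined, as in the log */
        return 0;
    }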
00:23:03.149 [2024-11-20 11:23:55.827687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1404810 (9): Bad file descriptor
00:23:03.149 [2024-11-20 11:23:55.828036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:03.149 [2024-11-20 11:23:55.828054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:03.149 [2024-11-20 11:23:55.828064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:03.149 [2024-11-20 11:23:55.828490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.149 [2024-11-20 11:23:55.828530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fd9f0 with addr=10.0.0.2, port=4420
00:23:03.149 [2024-11-20 11:23:55.828541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fd9f0 is same with the state(6) to be set
00:23:03.149 [2024-11-20 11:23:55.828869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.149 [2024-11-20 11:23:55.828881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1403420 with addr=10.0.0.2, port=4420
00:23:03.149 [2024-11-20 11:23:55.828889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403420 is same with the state(6) to be set
00:23:03.149 [2024-11-20 11:23:55.829350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.149 [2024-11-20 11:23:55.829390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131e610 with addr=10.0.0.2, port=4420
00:23:03.149 [2024-11-20 11:23:55.829402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e610 is same with the state(6) to be set
00:23:03.149 [2024-11-20 11:23:55.829417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:03.149 [2024-11-20 11:23:55.829426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:03.149 [2024-11-20 11:23:55.829435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:03.149 [2024-11-20 11:23:55.829444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
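errno 111 is ECONNREFUSED on Linux: while the target side of the test has torn down its listener, each reconnect to 10.0.0.2:4420 (the well-known NVMe/TCP port) is refused, so spdk_nvme_ctrlr_reconnect_poll_async gives up and the controller is marked failed. A standalone reproduction of that errno, assuming a local port with nothing listening rather than the test's target address:

    /* Reproduce "connect() failed, errno = 111" by connecting to a
     * port with no listener. Uses only POSIX sockets; 127.0.0.1:4420
     * stands in for the test's 10.0.0.2:4420. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP well-known port */
        };

        if (fd < 0) {
            return 1;
        }
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Expect errno == ECONNREFUSED (111) when nothing listens. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }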
00:23:03.149 [2024-11-20 11:23:55.830558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.149 [2024-11-20 11:23:55.830574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 48 further identical READ/ABORTED - SQ DELETION (00/08) pairs elided: cid:1-48, lba:24704-30720, len:128 ...]
00:23:03.151 [2024-11-20 11:23:55.831430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.151 [2024-11-20 11:23:55.831437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:03.151 [2024-11-20 11:23:55.831447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.151 [2024-11-20 11:23:55.831682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.151 [2024-11-20 11:23:55.831691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180e1e0 is same with the state(6) to be set 00:23:03.151 [2024-11-20 11:23:55.833246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:03.412 task offset: 28160 on job bdev=Nvme1n1 fails 00:23:03.412 00:23:03.412 Latency(us) 00:23:03.412 [2024-11-20T10:23:56.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.412 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.412 Job: Nvme1n1 ended in about 1.05 seconds with error 00:23:03.412 Verification LBA range: start 0x0 length 0x400 00:23:03.412 Nvme1n1 : 1.05 182.88 11.43 60.96 0.00 259850.88 5461.33 256901.12 00:23:03.412 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.412 Job: Nvme2n1 ended in about 1.08 seconds with error 00:23:03.412 Verification LBA range: start 0x0 length 0x400 00:23:03.412 Nvme2n1 : 1.08 178.55 11.16 59.52 0.00 261496.53 20971.52 242920.11 00:23:03.412 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.412 Job: Nvme3n1 ended in about 1.07 seconds with error 00:23:03.412 Verification LBA range: start 0x0 length 0x400 00:23:03.412 Nvme3n1 : 1.07 179.08 11.19 59.69 0.00 255908.05 14745.60 251658.24 00:23:03.412 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.412 Job: Nvme4n1 ended in about 1.08 seconds with error 00:23:03.412 Verification LBA range: start 0x0 length 0x400 00:23:03.412 Nvme4n1 : 1.08 178.14 11.13 59.38 0.00 252607.57 34515.63 241172.48 00:23:03.412 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.412 Job: Nvme5n1 ended in about 1.07 seconds with error 00:23:03.412 Verification LBA range: start 0x0 length 0x400 00:23:03.412 Nvme5n1 : 1.07 179.38 11.21 59.79 0.00 245978.35 9120.43 241172.48 00:23:03.412 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.412 Job: Nvme6n1 ended in about 1.06 seconds with error 00:23:03.412 Verification LBA range: start 0x0 length 0x400 
00:23:03.412 Nvme6n1 : 1.06 180.88 11.30 60.29 0.00 239047.04 14854.83 276125.01
00:23:03.412 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:03.412 Job: Nvme7n1 ended in about 1.08 seconds with error
00:23:03.412 Verification LBA range: start 0x0 length 0x400
00:23:03.412 Nvme7n1 : 1.08 180.47 11.28 59.23 0.00 236249.48 18677.76 234181.97
00:23:03.412 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:03.412 Job: Nvme8n1 ended in about 1.08 seconds with error
00:23:03.412 Verification LBA range: start 0x0 length 0x400
00:23:03.412 Nvme8n1 : 1.08 186.57 11.66 49.88 0.00 234091.73 13981.01 286610.77
00:23:03.412 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:03.412 Job: Nvme9n1 ended in about 1.09 seconds with error
00:23:03.412 Verification LBA range: start 0x0 length 0x400
00:23:03.412 Nvme9n1 : 1.09 176.38 11.02 58.79 0.00 231557.33 14964.05 253405.87
00:23:03.412 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:03.412 Job: Nvme10n1 ended in about 1.07 seconds with error
00:23:03.412 Verification LBA range: start 0x0 length 0x400
00:23:03.412 Nvme10n1 : 1.07 188.24 11.76 59.94 0.00 214217.81 17257.81 244667.73
00:23:03.412 [2024-11-20T10:23:56.154Z] ===================================================================================================================
00:23:03.412 [2024-11-20T10:23:56.154Z] Total : 1810.57 113.16 587.47 0.00 242991.42 5461.33 286610.77
00:23:03.412 [2024-11-20 11:23:55.858412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:03.412 [2024-11-20 11:23:55.858447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:03.412 [2024-11-20 11:23:55.858852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.412 [2024-11-20 11:23:55.858871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182a8a0 with addr=10.0.0.2, port=4420
00:23:03.412 [2024-11-20 11:23:55.858881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182a8a0 is same with the state(6) to be set
00:23:03.412 [2024-11-20 11:23:55.859087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.412 [2024-11-20 11:23:55.859098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x182a6c0 with addr=10.0.0.2, port=4420
00:23:03.412 [2024-11-20 11:23:55.859105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182a6c0 is same with the state(6) to be set
00:23:03.412 [2024-11-20 11:23:55.859402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.412 [2024-11-20 11:23:55.859421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1406cb0 with addr=10.0.0.2, port=4420
00:23:03.412 [2024-11-20 11:23:55.859428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1406cb0 is same with the state(6) to be set
00:23:03.412 [2024-11-20 11:23:55.859441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fd9f0 (9): Bad file descriptor
00:23:03.412 [2024-11-20 11:23:55.859453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403420 (9): Bad file descriptor
00:23:03.412 [2024-11-20 11:23:55.859464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131e610 (9): Bad file descriptor
00:23:03.412 [2024-11-20 11:23:55.859473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:03.412 [2024-11-20 11:23:55.859480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:03.412 [2024-11-20 11:23:55.859488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:03.412 [2024-11-20 11:23:55.859498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:03.412 [2024-11-20 11:23:55.859950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.412 [2024-11-20 11:23:55.859966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1849180 with addr=10.0.0.2, port=4420
00:23:03.412 [2024-11-20 11:23:55.859974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1849180 is same with the state(6) to be set
00:23:03.412 [2024-11-20 11:23:55.860166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.412 [2024-11-20 11:23:55.860179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18399c0 with addr=10.0.0.2, port=4420
00:23:03.412 [2024-11-20 11:23:55.860187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18399c0 is same with the state(6) to be set
00:23:03.412 [2024-11-20 11:23:55.860196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a8a0 (9): Bad file descriptor
00:23:03.412 [2024-11-20 11:23:55.860207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182a6c0 (9): Bad file descriptor
00:23:03.412 [2024-11-20 11:23:55.860216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406cb0 (9): Bad file descriptor
00:23:03.412 [2024-11-20 11:23:55.860225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:03.412 [2024-11-20 11:23:55.860233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:03.412 [2024-11-20 11:23:55.860241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:03.412 [2024-11-20 11:23:55.860248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:03.412 [2024-11-20 11:23:55.860256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:03.412 [2024-11-20 11:23:55.860264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:03.412 [2024-11-20 11:23:55.860271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:03.412 [2024-11-20 11:23:55.860278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:03.412 [2024-11-20 11:23:55.860285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:03.412 [2024-11-20 11:23:55.860292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:03.412 [2024-11-20 11:23:55.860299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:03.412 [2024-11-20 11:23:55.860308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:03.412 [2024-11-20 11:23:55.860365] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:03.413 [2024-11-20 11:23:55.860378] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:03.413 [2024-11-20 11:23:55.860391] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:03.413 [2024-11-20 11:23:55.860736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1849180 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.860751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18399c0 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.860760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.860767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.860774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.860781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.860789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.860796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.860803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.860809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.860816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.860823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.860831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.860837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.860874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:03.413 [2024-11-20 11:23:55.860885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:03.413 [2024-11-20 11:23:55.860895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:03.413 [2024-11-20 11:23:55.860904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:03.413 [2024-11-20 11:23:55.860913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:03.413 [2024-11-20 11:23:55.860951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.860959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.860966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.860972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.860980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.860990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.860997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.861004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.861195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.413 [2024-11-20 11:23:55.861209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1832180 with addr=10.0.0.2, port=4420
00:23:03.413 [2024-11-20 11:23:55.861217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1832180 is same with the state(6) to be set
00:23:03.413 [2024-11-20 11:23:55.861502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.413 [2024-11-20 11:23:55.861512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1404810 with addr=10.0.0.2, port=4420
00:23:03.413 [2024-11-20 11:23:55.861521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1404810 is same with the state(6) to be set
00:23:03.413 [2024-11-20 11:23:55.861822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.413 [2024-11-20 11:23:55.861833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x131e610 with addr=10.0.0.2, port=4420
00:23:03.413 [2024-11-20 11:23:55.861841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x131e610 is same with the state(6) to be set
00:23:03.413 [2024-11-20 11:23:55.862002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.413 [2024-11-20 11:23:55.862015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1403420 with addr=10.0.0.2, port=4420
00:23:03.413 [2024-11-20 11:23:55.862023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1403420 is same with the state(6) to be set
00:23:03.413 [2024-11-20 11:23:55.862301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:03.413 [2024-11-20 11:23:55.862312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13fd9f0 with addr=10.0.0.2, port=4420
00:23:03.413 [2024-11-20 11:23:55.862319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fd9f0 is same with the state(6) to be set
00:23:03.413 [2024-11-20 11:23:55.862353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1832180 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.862363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1404810 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.862372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131e610 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.862382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1403420 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.862390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13fd9f0 (9): Bad file descriptor
00:23:03.413 [2024-11-20 11:23:55.862417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.862425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.862432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.862439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.862446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.862453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.862464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.862470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.862477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.862483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.862490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.862497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.862504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.862510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.862517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.862523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:03.413 [2024-11-20 11:23:55.862530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:03.413 [2024-11-20 11:23:55.862536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:03.413 [2024-11-20 11:23:55.862543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:03.413 [2024-11-20 11:23:55.862549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:03.413 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2802225 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2802225 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2802225 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:04.358 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:04.359 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.359 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:04.359 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.359 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:04.359 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.359 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.359 rmmod nvme_tcp 00:23:04.359 
rmmod nvme_fabrics 00:23:04.619 rmmod nvme_keyring 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2801919 ']' 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2801919 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2801919 ']' 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2801919 00:23:04.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2801919) - No such process 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2801919 is not found' 00:23:04.619 Process with pid 2801919 is not found 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.619 11:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.531 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.531 00:23:06.531 real 0m8.239s 00:23:06.531 user 0m21.260s 00:23:06.531 sys 0m1.314s 00:23:06.531 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.531 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:06.531 ************************************ 00:23:06.531 END TEST nvmf_shutdown_tc3 00:23:06.531 ************************************ 00:23:06.792 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.792 ************************************ 00:23:06.792 START TEST nvmf_shutdown_tc4 00:23:06.792 ************************************ 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:06.792 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:06.792 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.792 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:06.792 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.792 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:06.793 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:06.793 11:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:06.793 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:23:07.053 00:23:07.053 --- 10.0.0.2 ping statistics --- 00:23:07.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.053 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:23:07.053 00:23:07.053 --- 10.0.0.1 ping statistics --- 00:23:07.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.053 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2803686 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2803686 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2803686 ']' 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.053 11:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.053 [2024-11-20 11:23:59.748138] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:23:07.053 [2024-11-20 11:23:59.748207] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.314 [2024-11-20 11:23:59.844563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.314 [2024-11-20 11:23:59.878421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.314 [2024-11-20 11:23:59.878453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.314 [2024-11-20 11:23:59.878459] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.314 [2024-11-20 11:23:59.878464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.314 [2024-11-20 11:23:59.878469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.314 [2024-11-20 11:23:59.880024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.314 [2024-11-20 11:23:59.880194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.314 [2024-11-20 11:23:59.880345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.314 [2024-11-20 11:23:59.880347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.885 [2024-11-20 11:24:00.598719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:07.885 11:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:07.885 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.145 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.146 11:24:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.146 Malloc1 
00:23:08.146 [2024-11-20 11:24:00.721674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.146 Malloc2 00:23:08.146 Malloc3 00:23:08.146 Malloc4 00:23:08.146 Malloc5 00:23:08.407 Malloc6 00:23:08.407 Malloc7 00:23:08.407 Malloc8 00:23:08.407 Malloc9 00:23:08.407 Malloc10 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2804132 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:08.407 11:24:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:08.667 [2024-11-20 11:24:01.206737] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2803686 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2803686 ']' 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2803686 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2803686 00:23:13.964 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:13.965 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:13.965 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2803686' 00:23:13.965 killing process with pid 2803686 00:23:13.965 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2803686 00:23:13.965 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2803686 00:23:13.965 [2024-11-20 11:24:06.203258] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dee90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dee90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dee90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dee90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dee90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df380 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df380 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.203786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df380 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.204622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e00e0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.204644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e00e0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.204652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e00e0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.204659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e00e0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.204667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e00e0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.204674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e00e0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.205032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e05d0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.205051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e05d0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.205056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e05d0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.205062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e05d0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.205068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e05d0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.205073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e05d0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 
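(Reconstructing the tc4 sequence from the trace so far: the target comes up inside the namespace, a TCP transport and ten Malloc-backed subsystems are created, spdk_nvme_perf queues deep random writes against them, and the target is killed five seconds into the run. A sketch under those assumptions; the rpc.py subcommands are standard SPDK RPCs, but the Malloc sizing and per-subsystem layout are illustrative, not read from this log.)

    # Target side, inside the namespace (flags as logged: -i 0 -e 0xFFFF -m 0x1E).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in {1..10}; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 64 512     # size/blocksize assumed
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # Initiator side: deep queues (perf flags exactly as logged), then kill the
    # target while writes are still in flight.
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5
    kill "$nvmfpid"; wait "$nvmfpid"                                 # shutdown under load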
00:23:13.965 [2024-11-20 11:24:06.206220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e4030 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0f90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0f90 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0f90 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.206909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1480 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 [2024-11-20 11:24:06.206929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1480 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.206938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1480 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1480 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.206953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1480 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.206959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1480 is 
same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.207186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:13.965 [2024-11-20 11:24:06.207302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e1970 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 starting I/O failed: -6 00:23:13.965 [2024-11-20 11:24:06.207540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.207570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.207581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 Write completed with error (sct=0, sc=8) 00:23:13.965 [2024-11-20 11:24:06.207596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 [2024-11-20 11:24:06.207601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e0ac0 is same with the state(6) to be set 00:23:13.965 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 [2024-11-20 11:24:06.208084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 
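(For reading the flood that follows: "sct=0, sc=8" is NVMe status code type 0, status code 0x08, i.e. Command Aborted due to SQ Deletion, which the host driver assigns to every write still queued on a qpair when that qpair is torn down, while "starting I/O failed: -6" is a new submission refused with -ENXIO. One way to tally the two in a saved copy of this console output; the file name is an assumption, and gsub is used because these wrapped lines hold several records each.)

    # Count aborted completions vs. refused submissions in a saved log file.
    awk '{ a += gsub(/Write completed with error \(sct=0, sc=8\)/, "&")
           f += gsub(/starting I\/O failed: -6/, "&") }
         END { print "aborted completions:", a; print "refused submissions:", f }' console.log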
starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with 
error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error 
(sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.966 Write completed with error (sct=0, sc=8) 00:23:13.966 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 [2024-11-20 11:24:06.210351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:13.967 NVMe io qpair process completion error 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with 
error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 [2024-11-20 11:24:06.211484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.967 starting I/O failed: -6 00:23:13.967 starting I/O failed: -6 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 [2024-11-20 11:24:06.212030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e2330 is same with the state(6) to be set 00:23:13.967 starting I/O failed: -6 00:23:13.967 [2024-11-20 11:24:06.212048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e2330 is same with the state(6) to be set 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 [2024-11-20 11:24:06.212054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e2330 is same with the state(6) to be set 00:23:13.967 [2024-11-20 11:24:06.212060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e2330 is same with the 
state(6) to be set 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 [2024-11-20 11:24:06.212454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 
Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 Write completed with error (sct=0, sc=8) 00:23:13.967 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 [2024-11-20 11:24:06.213338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting 
I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O 
failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 [2024-11-20 11:24:06.214884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:13.968 NVMe io qpair process completion error 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed 
with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 [2024-11-20 11:24:06.216427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 Write completed with error (sct=0, sc=8) 00:23:13.968 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 [2024-11-20 11:24:06.217272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 
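(Each "[nqn...:cnodeN, 1] CQ transport error -6 (No such device or address) on qpair id M" record above marks one host qpair noticing its TCP connection drop, which is exactly what tc4 provokes by killing the target mid-run. A quick way to list which controller/qpair pairs were hit, assuming the same saved console.log capture as before:)

    # Distinct (controller, qpair id) pairs reporting the transport error.
    grep -o 'cnode[0-9]*, 1] CQ transport error -6 ([^)]*) on qpair id [0-9]*' console.log | sort -u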
00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 
00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 [2024-11-20 11:24:06.218228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.969 Write completed with error (sct=0, sc=8) 00:23:13.969 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write 
completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 [2024-11-20 11:24:06.220341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:13.970 NVMe io qpair process completion error 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 starting I/O failed: -6 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed with error (sct=0, sc=8) 00:23:13.970 Write completed 
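Note on the status pair above: sct=0 is the NVMe generic command status type and sc=0x08 is "Command Aborted due to SQ Deletion", i.e. each in-flight write is aborted because the queue pair behind it is being torn down ("No such device or address"). A minimal sketch of how an SPDK host application would classify these completions in its I/O callback; it uses only SPDK's public spdk_nvme_cpl API, and the callback name and handling are illustrative, not the test's actual code:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* I/O completion callback, as registered with e.g. spdk_nvme_ns_cmd_write(). */
    static void
    write_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* Produces lines like "Write completed with error (sct=0, sc=8)". */
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                /* Aborted because its submission queue was deleted:
                 * retry on another path or fail the I/O upward. */
            }
        }
    }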
00:23:13.970 Write completed with error (sct=0, sc=8)
00:23:13.970 starting I/O failed: -6
[... same aborted-write pattern elided ...]
00:23:13.970 [2024-11-20 11:24:06.221355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same aborted-write pattern elided ...]
00:23:13.970 [2024-11-20 11:24:06.222169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... same aborted-write pattern elided ...]
00:23:13.971 [2024-11-20 11:24:06.223091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... same aborted-write pattern elided ...]
00:23:13.971 Write completed with error (sct=0, sc=8)
00:23:13.971 starting I/O failed: -6
00:23:13.971 [2024-11-20 11:24:06.225243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:13.971 NVMe io qpair process completion error
[... same aborted-write pattern elided ...]
00:23:13.972 [2024-11-20 11:24:06.226371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same aborted-write pattern elided ...]
00:23:13.972 [2024-11-20 11:24:06.227194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... same aborted-write pattern elided ...]
00:23:13.972 [2024-11-20 11:24:06.228125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... same aborted-write pattern elided ...]
00:23:13.973 [2024-11-20 11:24:06.229765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:13.973 NVMe io qpair process completion error
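The nvme_qpair.c: 812 entries come from the host-side completion poller: spdk_nvme_qpair_process_completions() returns a negative errno instead of a completion count once the TCP connection behind a qpair is gone, and new submissions fail with the same value, hence "starting I/O failed: -6" (-6 is -ENXIO, "No such device or address"). A rough sketch of that poll loop, again using only the public SPDK qpair API, with the function name and error handling invented for illustration:

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* max_completions == 0: drain every completion currently available. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0) {
            /* -ENXIO (-6) mirrors the "CQ transport error -6" entries:
             * the controller/connection backing this qpair has disappeared,
             * and its outstanding writes complete as aborted (sct=0, sc=8). */
            fprintf(stderr, "qpair poll failed: %d\n", rc);
        }
    }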
00:23:13.973 Write completed with error (sct=0, sc=8)
00:23:13.973 starting I/O failed: -6
[... same aborted-write pattern elided ...]
00:23:13.973 [2024-11-20 11:24:06.230921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same aborted-write pattern elided ...]
00:23:13.973 [2024-11-20 11:24:06.231837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... same aborted-write pattern elided ...]
00:23:13.974 [2024-11-20 11:24:06.232747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... same aborted-write pattern elided ...]
00:23:13.974 [2024-11-20 11:24:06.236145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:13.974 NVMe io qpair process completion error
[... same aborted-write pattern elided ...]
00:23:13.975 [2024-11-20 11:24:06.237282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:13.975 Write completed with error (sct=0, sc=8)
00:23:13.975 starting I/O failed: -6
[... same aborted-write pattern elided ...]
00:23:13.975 [2024-11-20 11:24:06.238237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... same aborted-write pattern elided ...]
00:23:13.975 [2024-11-20 11:24:06.239143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... same aborted-write pattern elided ...]
00:23:13.975 Write completed with error (sct=0, sc=8)
00:23:13.975 starting I/O failed: -6
00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 [2024-11-20 11:24:06.240598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.976 NVMe io qpair process completion error 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with 
error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 [2024-11-20 11:24:06.241615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 
starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 [2024-11-20 11:24:06.242431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.976 Write completed with error (sct=0, sc=8) 00:23:13.976 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error 
(sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 [2024-11-20 11:24:06.243368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 
00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 [2024-11-20 11:24:06.245451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.977 NVMe io qpair process completion error 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error 
(sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.977 starting I/O failed: -6 00:23:13.977 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 [2024-11-20 11:24:06.246752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 
00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 [2024-11-20 11:24:06.247577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 
Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 [2024-11-20 11:24:06.248536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting 
I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.978 Write completed with error (sct=0, sc=8) 00:23:13.978 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O 
failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 [2024-11-20 11:24:06.251061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.979 NVMe io qpair process completion error 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed 
with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 [2024-11-20 11:24:06.252144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 Write completed with error (sct=0, sc=8) 00:23:13.979 starting I/O failed: -6 00:23:13.980 [2024-11-20 
11:24:06.252986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 
Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 [2024-11-20 11:24:06.253919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 
starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 Write completed with error (sct=0, sc=8) 00:23:13.980 starting I/O failed: -6 00:23:13.980 [2024-11-20 11:24:06.255753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:13.980 NVMe io qpair process completion error 00:23:13.980 Initializing NVMe Controllers 00:23:13.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:13.980 Controller IO queue size 128, less than required. 00:23:13.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
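The repeated "CQ transport error -6 (No such device or address)" entries above are the expected signature of this test tearing targets down under load: spdk_nvme_qpair_process_completions() returns a negative errno once the TCP connection backing a qpair is gone, and every queued write then completes with sct=0/sc=8 (command aborted due to SQ deletion). A minimal polling sketch in C, assuming an already-connected qpair (illustrative only, not the test's actual code):

    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair; a sketch of the loop whose error path
     * produces the "CQ transport error" lines in the log above. */
    static void poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* max_completions = 0: process everything that is ready. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* -6 is -ENXIO ("No such device or address"): the transport
             * failed, and outstanding I/O completes with an abort status
             * (sct=0, sc=8, i.e. SPDK_NVME_SC_ABORTED_SQ_DELETION). */
            fprintf(stderr, "qpair transport error: %d\n", rc);
        }
    }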
00:23:13.980 Initializing NVMe Controllers
00:23:13.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:13.980 Controller IO queue size 128, less than required.
00:23:13.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:13.980 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:13.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:13.981 Controller IO queue size 128, less than required.
00:23:13.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
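The "Controller IO queue size 128, less than required" advisories mean the workload asked for a deeper queue than the fabrics controller exposes, so the overflow is queued inside the NVMe driver, exactly as the log suggests. Queue sizing is chosen at connect time through the controller options; a sketch, with 256 and 512 as arbitrary example values not taken from the log:

    #include "spdk/nvme.h"

    /* Sketch: request an explicit I/O queue size when connecting.
     * The target may still clamp io_queue_size to what it supports. */
    static struct spdk_nvme_ctrlr *
    connect_with_queue_size(const struct spdk_nvme_transport_id *trid)
    {
        struct spdk_nvme_ctrlr_opts opts;

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        opts.io_queue_size = 256;     /* example value */
        opts.io_queue_requests = 512; /* extra requests queue in the driver */

        return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }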
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:13.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:13.981 Initialization complete. Launching workers.
00:23:13.981 ========================================================
00:23:13.981 Latency(us)
00:23:13.981 Device Information : IOPS MiB/s Average min max
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1885.07 81.00 67919.35 676.96 130004.34
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1878.16 80.70 68185.61 831.71 127031.69
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1879.83 80.77 68156.99 702.55 126846.82
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1879.62 80.77 68195.07 785.98 135717.30
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1887.79 81.12 67220.44 536.53 118953.76
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1882.56 80.89 67424.39 729.62 122900.66
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1871.25 80.41 67857.59 719.80 124614.15
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1853.87 79.66 68520.86 833.90 122736.13
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1854.71 79.69 68517.29 699.80 122686.40
00:23:13.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1888.42 81.14 67316.70 733.56 126539.03
00:23:13.981 ========================================================
00:23:13.981 Total : 18761.29 806.15 67929.18 536.53 135717.30
00:23:13.981
00:23:13.981 [2024-11-20 11:24:06.260698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1526900 is same with the state(6) to be set
00:23:13.981 [2024-11-20 11:24:06.260742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1526ae0 is same with the state(6) to be set
00:23:13.981 [2024-11-20 11:24:06.260772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524890 is same with the state(6) to be set
00:23:13.981 [2024-11-20 11:24:06.260802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1525a70 is same with the state(6) to be set
00:23:13.981 [2024-11-20 11:24:06.260830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524560 is same with the state(6) to be set
00:23:13.981 [2024-11-20 11:24:06.260862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1524ef0 is same with the state(6) to be set 00:23:13.981 [2024-11-20 11:24:06.260894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1526720 is same with the state(6) to be set 00:23:13.981 [2024-11-20 11:24:06.260923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1525740 is same with the state(6) to be set 00:23:13.981 [2024-11-20 11:24:06.260953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1524bc0 is same with the state(6) to be set 00:23:13.981 [2024-11-20 11:24:06.260982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1525410 is same with the state(6) to be set 00:23:13.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:13.981 11:24:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2804132 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2804132 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2804132 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.927 11:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.927 rmmod nvme_tcp 00:23:14.927 rmmod nvme_fabrics 00:23:14.927 rmmod nvme_keyring 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2803686 ']' 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2803686 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2803686 ']' 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2803686 00:23:14.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2803686) - No such process 00:23:14.927 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2803686 is not found' 00:23:14.927 Process with pid 2803686 is not found 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.928 11:24:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.474 11:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.474 00:23:17.474 real 0m10.298s 00:23:17.474 user 0m28.144s 00:23:17.474 sys 0m3.926s 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:17.474 ************************************ 00:23:17.474 END TEST nvmf_shutdown_tc4 00:23:17.474 ************************************ 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:17.474 00:23:17.474 real 0m43.428s 00:23:17.474 user 1m45.663s 00:23:17.474 sys 0m13.821s 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.474 ************************************ 00:23:17.474 END TEST nvmf_shutdown 00:23:17.474 ************************************ 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.474 ************************************ 00:23:17.474 START TEST nvmf_nsid 00:23:17.474 ************************************ 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:17.474 * Looking for test storage... 
00:23:17.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.474 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.475 --rc genhtml_branch_coverage=1 00:23:17.475 --rc genhtml_function_coverage=1 00:23:17.475 --rc genhtml_legend=1 00:23:17.475 --rc geninfo_all_blocks=1 00:23:17.475 --rc geninfo_unexecuted_blocks=1 00:23:17.475 00:23:17.475 ' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.475 --rc genhtml_branch_coverage=1 00:23:17.475 --rc genhtml_function_coverage=1 00:23:17.475 --rc genhtml_legend=1 00:23:17.475 --rc geninfo_all_blocks=1 00:23:17.475 --rc geninfo_unexecuted_blocks=1 00:23:17.475 00:23:17.475 ' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.475 --rc genhtml_branch_coverage=1 00:23:17.475 --rc genhtml_function_coverage=1 00:23:17.475 --rc genhtml_legend=1 00:23:17.475 --rc geninfo_all_blocks=1 00:23:17.475 --rc geninfo_unexecuted_blocks=1 00:23:17.475 00:23:17.475 ' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.475 --rc genhtml_branch_coverage=1 00:23:17.475 --rc genhtml_function_coverage=1 00:23:17.475 --rc genhtml_legend=1 00:23:17.475 --rc geninfo_all_blocks=1 00:23:17.475 --rc geninfo_unexecuted_blocks=1 00:23:17.475 00:23:17.475 ' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.475 11:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.616 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.617 11:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:23:25.617 00:23:25.617 --- 10.0.0.2 ping statistics --- 00:23:25.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.617 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:25.617 00:23:25.617 --- 10.0.0.1 ping statistics --- 00:23:25.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.617 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.617 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2809984 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2809984 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2809984 ']' 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.618 11:24:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.618 [2024-11-20 11:24:17.577901] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
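The two ping checks above complete nvmftestinit's wiring: the first e810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the nvmf_tcp_init trace above (device and namespace names as on this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # initiator -> target sanity check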
00:23:25.618 [2024-11-20 11:24:17.577965] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.618 [2024-11-20 11:24:17.678375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.618 [2024-11-20 11:24:17.731215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.618 [2024-11-20 11:24:17.731273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.618 [2024-11-20 11:24:17.731283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.618 [2024-11-20 11:24:17.731290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.618 [2024-11-20 11:24:17.731297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.618 [2024-11-20 11:24:17.732054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2810250 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=a24c3077-f488-43c3-9a35-42120540c07f 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=387f70ae-2e06-43f4-9b3f-95fa28384d4a 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=4069b51e-caf3-41e5-adbd-f8e25b0de4a2 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.878 null0 00:23:25.878 null1 00:23:25.878 null2 00:23:25.878 [2024-11-20 11:24:18.493400] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:23:25.878 [2024-11-20 11:24:18.493466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810250 ] 00:23:25.878 [2024-11-20 11:24:18.494617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.878 [2024-11-20 11:24:18.518917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2810250 /var/tmp/tgt2.sock 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2810250 ']' 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:25.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
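The three uuidgen values above seed the namespaces created on the second target; the checks further down verify that each namespace's NGUID is the same UUID with the dashes dropped and the hex upper-cased, as reported by nvme id-ns. A minimal sketch of that conversion and comparison (helper name illustrative; the test's own versions live in nsid.sh and nvmf/common.sh):

    # UUID -> NGUID: upper-case the hex and strip the dashes, mirroring the
    # 'tr -d -' step visible in the trace below.
    uuid2nguid() { tr -d - <<<"${1^^}"; }        # a24c3077-... -> A24C3077...
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ "$(uuid2nguid "$ns1uuid")" == "${nguid^^}" ]] && echo "NSID 1 NGUID matches its UUID"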
00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.878 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:25.878 [2024-11-20 11:24:18.582615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.138 [2024-11-20 11:24:18.636200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.398 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.398 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:26.398 11:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:26.660 [2024-11-20 11:24:19.192637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.660 [2024-11-20 11:24:19.208823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:26.660 nvme0n1 nvme0n2 00:23:26.660 nvme1n1 00:23:26.661 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:26.661 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:26.661 11:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:28.163 11:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:29.107 11:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid a24c3077-f488-43c3-9a35-42120540c07f 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a24c3077f48843c39a3542120540c07f 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A24C3077F48843C39A3542120540C07F 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ A24C3077F48843C39A3542120540C07F == \A\2\4\C\3\0\7\7\F\4\8\8\4\3\C\3\9\A\3\5\4\2\1\2\0\5\4\0\C\0\7\F ]] 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 387f70ae-2e06-43f4-9b3f-95fa28384d4a 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:29.107 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=387f70ae2e0643f49b3f95fa28384d4a 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 387F70AE2E0643F49B3F95FA28384D4A 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 387F70AE2E0643F49B3F95FA28384D4A == \3\8\7\F\7\0\A\E\2\E\0\6\4\3\F\4\9\B\3\F\9\5\F\A\2\8\3\8\4\D\4\A ]] 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:29.367 11:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 4069b51e-caf3-41e5-adbd-f8e25b0de4a2 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4069b51ecaf341e5adbdf8e25b0de4a2 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4069B51ECAF341E5ADBDF8E25B0DE4A2 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 4069B51ECAF341E5ADBDF8E25B0DE4A2 == \4\0\6\9\B\5\1\E\C\A\F\3\4\1\E\5\A\D\B\D\F\8\E\2\5\B\0\D\E\4\A\2 ]] 00:23:29.367 11:24:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2810250 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2810250 ']' 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2810250 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2810250 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2810250' 00:23:29.627 killing process with pid 2810250 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2810250 00:23:29.627 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2810250 00:23:29.887 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:29.887 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.888 rmmod nvme_tcp 00:23:29.888 rmmod nvme_fabrics 00:23:29.888 rmmod nvme_keyring 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2809984 ']' 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2809984 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2809984 ']' 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2809984 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2809984 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2809984' 00:23:29.888 killing process with pid 2809984 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2809984 00:23:29.888 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2809984 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.148 11:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.058 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.058 00:23:32.058 real 0m14.970s 00:23:32.058 user 
0m11.338s 00:23:32.058 sys 0m6.964s 00:23:32.058 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.058 11:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:32.058 ************************************ 00:23:32.058 END TEST nvmf_nsid 00:23:32.058 ************************************ 00:23:32.058 11:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:32.058 00:23:32.058 real 13m5.544s 00:23:32.058 user 27m21.904s 00:23:32.058 sys 3m57.060s 00:23:32.058 11:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.058 11:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:32.058 ************************************ 00:23:32.059 END TEST nvmf_target_extra 00:23:32.059 ************************************ 00:23:32.319 11:24:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:32.319 11:24:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.319 11:24:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.319 11:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.319 ************************************ 00:23:32.319 START TEST nvmf_host 00:23:32.319 ************************************ 00:23:32.319 11:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:32.319 * Looking for test storage... 00:23:32.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:32.319 11:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.320 11:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.320 11:24:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.320 --rc genhtml_branch_coverage=1 00:23:32.320 --rc genhtml_function_coverage=1 00:23:32.320 --rc genhtml_legend=1 00:23:32.320 --rc geninfo_all_blocks=1 00:23:32.320 --rc geninfo_unexecuted_blocks=1 00:23:32.320 00:23:32.320 ' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.320 --rc genhtml_branch_coverage=1 00:23:32.320 --rc genhtml_function_coverage=1 00:23:32.320 --rc genhtml_legend=1 00:23:32.320 --rc geninfo_all_blocks=1 00:23:32.320 --rc geninfo_unexecuted_blocks=1 00:23:32.320 00:23:32.320 ' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.320 --rc genhtml_branch_coverage=1 00:23:32.320 --rc genhtml_function_coverage=1 00:23:32.320 --rc genhtml_legend=1 00:23:32.320 --rc geninfo_all_blocks=1 00:23:32.320 --rc geninfo_unexecuted_blocks=1 00:23:32.320 00:23:32.320 ' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.320 --rc genhtml_branch_coverage=1 00:23:32.320 --rc genhtml_function_coverage=1 00:23:32.320 --rc genhtml_legend=1 00:23:32.320 --rc geninfo_all_blocks=1 00:23:32.320 --rc geninfo_unexecuted_blocks=1 00:23:32.320 00:23:32.320 ' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
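[editor's note] The lt/cmp_versions trace above is the harness deciding whether the installed lcov (1.15) predates version 2: each version string is split on '.', '-', and ':' into an array, and the arrays are compared field by field. A minimal standalone sketch of that component-wise compare, simplified from the traced scripts/common.sh logic rather than copied from it:

    # lt VER1 VER2 -> exit 0 iff VER1 sorts strictly before VER2.
    # Missing fields count as 0, so "1.15" vs "2" compares 1 < 2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"

Because the compare succeeds here, the suite exports the old-lcov LCOV_OPTS/LCOV settings seen above and then continues sourcing the nvmf/common.sh defaults below.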
00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.320 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 ************************************ 00:23:32.583 START TEST nvmf_multicontroller 00:23:32.583 ************************************ 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:32.583 * Looking for test storage... 
00:23:32.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.583 --rc genhtml_branch_coverage=1 00:23:32.583 --rc genhtml_function_coverage=1 00:23:32.583 --rc genhtml_legend=1 00:23:32.583 --rc geninfo_all_blocks=1 00:23:32.583 --rc geninfo_unexecuted_blocks=1 00:23:32.583 00:23:32.583 ' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.583 --rc genhtml_branch_coverage=1 00:23:32.583 --rc genhtml_function_coverage=1 00:23:32.583 --rc genhtml_legend=1 00:23:32.583 --rc geninfo_all_blocks=1 00:23:32.583 --rc geninfo_unexecuted_blocks=1 00:23:32.583 00:23:32.583 ' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.583 --rc genhtml_branch_coverage=1 00:23:32.583 --rc genhtml_function_coverage=1 00:23:32.583 --rc genhtml_legend=1 00:23:32.583 --rc geninfo_all_blocks=1 00:23:32.583 --rc geninfo_unexecuted_blocks=1 00:23:32.583 00:23:32.583 ' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.583 --rc genhtml_branch_coverage=1 00:23:32.583 --rc genhtml_function_coverage=1 00:23:32.583 --rc genhtml_legend=1 00:23:32.583 --rc geninfo_all_blocks=1 00:23:32.583 --rc geninfo_unexecuted_blocks=1 00:23:32.583 00:23:32.583 ' 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:32.583 11:24:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.583 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:32.845 11:24:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.845 11:24:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:40.994 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.995 
11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:40.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:40.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.995 11:24:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:40.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:40.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
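[editor's note] At this point nvmftestinit has matched both Intel E810 functions (device id 0x159b) in the PCI scan and resolved each one to its renamed kernel interface, cvl_0_0 and cvl_0_1, and with is_hw=yes it falls into nvmf_tcp_init below to build the test network. The resolution step relies only on the standard /sys/bus/pci layout; a sketch of the lookup (not the exact pci_devs bookkeeping from nvmf/common.sh):

    # Each netdev bound to a PCI function appears as a subdirectory
    # of that function's net/ node in sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue    # glob had no match
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done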
00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:40.995 00:23:40.995 --- 10.0.0.2 ping statistics --- 00:23:40.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.995 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:23:40.995 00:23:40.995 --- 10.0.0.1 ping statistics --- 00:23:40.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.995 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2815362 00:23:40.995 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2815362 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2815362 ']' 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.996 11:24:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.996 [2024-11-20 11:24:32.923049] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:23:40.996 [2024-11-20 11:24:32.923116] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.996 [2024-11-20 11:24:33.023623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:40.996 [2024-11-20 11:24:33.075841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.996 [2024-11-20 11:24:33.075892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.996 [2024-11-20 11:24:33.075900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.996 [2024-11-20 11:24:33.075907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.996 [2024-11-20 11:24:33.075914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.996 [2024-11-20 11:24:33.078029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.996 [2024-11-20 11:24:33.078207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.996 [2024-11-20 11:24:33.078257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 [2024-11-20 11:24:33.796983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 Malloc0 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 [2024-11-20 11:24:33.869535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 [2024-11-20 11:24:33.881445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 Malloc1 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2815474 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2815474 /var/tmp/bdevperf.sock 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2815474 ']' 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
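[editor's note] bdevperf has been started with -z (wait for RPC) on a private socket, so every controller in this test is created over JSON-RPC rather than on the command line. The attach that follows is equivalent to driving that socket with SPDK's scripts/rpc.py, roughly as below (arguments taken verbatim from the trace; the suite actually goes through its rpc_cmd wrapper):

    # One controller, NVMe0, against cnode1 at 10.0.0.2:4420,
    # initiated from the host-side address 10.0.0.1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
    # Repeating the attach under the same -b NVMe0 name with a new
    # hostnqn, a different subsystem, or -x disable must fail with
    # JSON-RPC error -114; the NOT wrappers below assert exactly that.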
00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.258 11:24:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.203 11:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.203 11:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:42.203 11:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:42.203 11:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.203 11:24:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.465 NVMe0n1 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.465 1 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.465 request: 00:23:42.465 { 00:23:42.465 "name": "NVMe0", 00:23:42.465 "trtype": "tcp", 00:23:42.465 "traddr": "10.0.0.2", 00:23:42.465 "adrfam": "ipv4", 00:23:42.465 "trsvcid": "4420", 00:23:42.465 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:42.465 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:42.465 "hostaddr": "10.0.0.1", 00:23:42.465 "prchk_reftag": false, 00:23:42.465 "prchk_guard": false, 00:23:42.465 "hdgst": false, 00:23:42.465 "ddgst": false, 00:23:42.465 "allow_unrecognized_csi": false, 00:23:42.465 "method": "bdev_nvme_attach_controller", 00:23:42.465 "req_id": 1 00:23:42.465 } 00:23:42.465 Got JSON-RPC error response 00:23:42.465 response: 00:23:42.465 { 00:23:42.465 "code": -114, 00:23:42.465 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:42.465 } 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.465 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.466 request: 00:23:42.466 { 00:23:42.466 "name": "NVMe0", 00:23:42.466 "trtype": "tcp", 00:23:42.466 "traddr": "10.0.0.2", 00:23:42.466 "adrfam": "ipv4", 00:23:42.466 "trsvcid": "4420", 00:23:42.466 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:42.466 "hostaddr": "10.0.0.1", 00:23:42.466 "prchk_reftag": false, 00:23:42.466 "prchk_guard": false, 00:23:42.466 "hdgst": false, 00:23:42.466 "ddgst": false, 00:23:42.466 "allow_unrecognized_csi": false, 00:23:42.466 "method": "bdev_nvme_attach_controller", 00:23:42.466 "req_id": 1 00:23:42.466 } 00:23:42.466 Got JSON-RPC error response 00:23:42.466 response: 00:23:42.466 { 00:23:42.466 "code": -114, 00:23:42.466 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:42.466 } 00:23:42.466 11:24:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.466 request: 00:23:42.466 { 00:23:42.466 "name": "NVMe0", 00:23:42.466 "trtype": "tcp", 00:23:42.466 "traddr": "10.0.0.2", 00:23:42.466 "adrfam": "ipv4", 00:23:42.466 "trsvcid": "4420", 00:23:42.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.466 "hostaddr": "10.0.0.1", 00:23:42.466 "prchk_reftag": false, 00:23:42.466 "prchk_guard": false, 00:23:42.466 "hdgst": false, 00:23:42.466 "ddgst": false, 00:23:42.466 "multipath": "disable", 00:23:42.466 "allow_unrecognized_csi": false, 00:23:42.466 "method": "bdev_nvme_attach_controller", 00:23:42.466 "req_id": 1 00:23:42.466 } 00:23:42.466 Got JSON-RPC error response 00:23:42.466 response: 00:23:42.466 { 00:23:42.466 "code": -114, 00:23:42.466 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:42.466 } 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.466 11:24:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.466 request: 00:23:42.466 { 00:23:42.466 "name": "NVMe0", 00:23:42.466 "trtype": "tcp", 00:23:42.466 "traddr": "10.0.0.2", 00:23:42.466 "adrfam": "ipv4", 00:23:42.466 "trsvcid": "4420", 00:23:42.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.466 "hostaddr": "10.0.0.1", 00:23:42.466 "prchk_reftag": false, 00:23:42.466 "prchk_guard": false, 00:23:42.466 "hdgst": false, 00:23:42.466 "ddgst": false, 00:23:42.466 "multipath": "failover", 00:23:42.466 "allow_unrecognized_csi": false, 00:23:42.466 "method": "bdev_nvme_attach_controller", 00:23:42.466 "req_id": 1 00:23:42.466 } 00:23:42.466 Got JSON-RPC error response 00:23:42.466 response: 00:23:42.466 { 00:23:42.466 "code": -114, 00:23:42.466 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:42.466 } 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.466 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.728 NVMe0n1 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
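All three NOT cases above fail with code -114, i.e. the bdev_nvme layer refuses a second controller named NVMe0 on a conflicting network path or multipath mode. For reference, the raw exchange behind rpc_cmd is ordinary JSON-RPC 2.0 over the UNIX socket; the sketch below mirrors the request dump printed in the trace (params trimmed to the essentials). A netcat build with UNIX-socket support (-U) is an assumption here; socat would work equally well.

# Sketch of the raw JSON-RPC call; params are a subset of the dump above.
cat <<'EOF' | nc -U /var/tmp/bdevperf.sock
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "NVMe0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostaddr": "10.0.0.1",
    "multipath": "failover"
  }
}
EOF
# With NVMe0 already attached on 10.0.0.2:4420, the target answers with
# code -114: "A controller named NVMe0 already exists with the
# specified network path", exactly as captured in the trace above.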
00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.728 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.990 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:42.990 11:24:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.375 { 00:23:44.375 "results": [ 00:23:44.375 { 00:23:44.375 "job": "NVMe0n1", 00:23:44.375 "core_mask": "0x1", 00:23:44.375 "workload": "write", 00:23:44.375 "status": "finished", 00:23:44.375 "queue_depth": 128, 00:23:44.375 "io_size": 4096, 00:23:44.375 "runtime": 1.006741, 00:23:44.375 "iops": 20577.288498233407, 00:23:44.375 "mibps": 80.38003319622425, 00:23:44.375 "io_failed": 0, 00:23:44.375 "io_timeout": 0, 00:23:44.375 "avg_latency_us": 6206.4078470747245, 00:23:44.375 "min_latency_us": 2102.6133333333332, 00:23:44.375 "max_latency_us": 13325.653333333334 00:23:44.375 } 00:23:44.375 ], 00:23:44.375 "core_count": 1 00:23:44.375 } 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2815474 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2815474 ']' 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2815474 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815474 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815474' 00:23:44.375 killing process with pid 2815474 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2815474 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2815474 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:44.375 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:44.375 [2024-11-20 11:24:34.014222] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:23:44.375 [2024-11-20 11:24:34.014304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2815474 ] 00:23:44.375 [2024-11-20 11:24:34.108813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.375 [2024-11-20 11:24:34.163142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.375 [2024-11-20 11:24:35.589838] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 980e0697-ea76-41ca-83f4-1a8e3a6df748 already exists 00:23:44.375 [2024-11-20 11:24:35.589882] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:980e0697-ea76-41ca-83f4-1a8e3a6df748 alias for bdev NVMe1n1 00:23:44.375 [2024-11-20 11:24:35.589893] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:44.375 Running I/O for 1 seconds... 00:23:44.375 20522.00 IOPS, 80.16 MiB/s 00:23:44.375 Latency(us) 00:23:44.375 [2024-11-20T10:24:37.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.375 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:44.375 NVMe0n1 : 1.01 20577.29 80.38 0.00 0.00 6206.41 2102.61 13325.65 00:23:44.375 [2024-11-20T10:24:37.117Z] =================================================================================================================== 00:23:44.375 [2024-11-20T10:24:37.117Z] Total : 20577.29 80.38 0.00 0.00 6206.41 2102.61 13325.65 00:23:44.375 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.375 00:23:44.375 Latency(us) 00:23:44.375 [2024-11-20T10:24:37.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.375 [2024-11-20T10:24:37.117Z] =================================================================================================================== 00:23:44.375 [2024-11-20T10:24:37.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.375 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.375 11:24:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.375 rmmod nvme_tcp 00:23:44.375 rmmod nvme_fabrics 00:23:44.375 rmmod nvme_keyring 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:44.375 
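The perform_tests summary dumped above is internally consistent: throughput in MiB/s is just iops × io_size / 2^20, which a one-liner confirms against the reported numbers.

# Sanity-check the bdevperf result above: MiB/s = iops * io_size / 2^20
awk 'BEGIN { printf "%.2f MiB/s\n", 20577.288498233407 * 4096 / 1048576 }'
# -> 80.38 MiB/s, matching the reported "mibps" value for NVMe0n1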
11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2815362 ']' 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2815362 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2815362 ']' 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2815362 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.375 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815362 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815362' 00:23:44.636 killing process with pid 2815362 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2815362 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2815362 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.636 11:24:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.183 00:23:47.183 real 0m14.226s 00:23:47.183 user 0m18.028s 00:23:47.183 sys 0m6.518s 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:47.183 ************************************ 00:23:47.183 END TEST nvmf_multicontroller 00:23:47.183 ************************************ 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.183 ************************************ 00:23:47.183 START TEST nvmf_aer 00:23:47.183 ************************************ 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:47.183 * Looking for test storage... 00:23:47.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:47.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.183 --rc genhtml_branch_coverage=1 00:23:47.183 --rc genhtml_function_coverage=1 00:23:47.183 --rc genhtml_legend=1 00:23:47.183 --rc geninfo_all_blocks=1 00:23:47.183 --rc geninfo_unexecuted_blocks=1 00:23:47.183 00:23:47.183 ' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:47.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.183 --rc genhtml_branch_coverage=1 00:23:47.183 --rc genhtml_function_coverage=1 00:23:47.183 --rc genhtml_legend=1 00:23:47.183 --rc geninfo_all_blocks=1 00:23:47.183 --rc geninfo_unexecuted_blocks=1 00:23:47.183 00:23:47.183 ' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:47.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.183 --rc genhtml_branch_coverage=1 00:23:47.183 --rc genhtml_function_coverage=1 00:23:47.183 --rc genhtml_legend=1 00:23:47.183 --rc geninfo_all_blocks=1 00:23:47.183 --rc geninfo_unexecuted_blocks=1 00:23:47.183 00:23:47.183 ' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:47.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.183 --rc genhtml_branch_coverage=1 00:23:47.183 --rc genhtml_function_coverage=1 00:23:47.183 --rc genhtml_legend=1 00:23:47.183 --rc geninfo_all_blocks=1 00:23:47.183 --rc geninfo_unexecuted_blocks=1 00:23:47.183 00:23:47.183 ' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.183 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.184 11:24:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:55.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:55.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:55.324 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:55.325 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.325 11:24:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:55.325 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.325 11:24:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.325 
11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:23:55.325 00:23:55.325 --- 10.0.0.2 ping statistics --- 00:23:55.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.325 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:23:55.325 00:23:55.325 --- 10.0.0.1 ping statistics --- 00:23:55.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.325 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2820343 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2820343 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2820343 ']' 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.325 11:24:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.325 [2024-11-20 11:24:47.230131] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
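The addressing probed by the two pings above was built by nvmf_tcp_init a few lines earlier: one e810 port (cvl_0_0) moves into a network namespace as the target side while its peer (cvl_0_1) stays in the default namespace as the initiator. Condensed from the traced common.sh lines, the topology setup is:

# Condensed from nvmf/common.sh@265..291 in the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic into the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator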
00:23:55.325 [2024-11-20 11:24:47.230213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.325 [2024-11-20 11:24:47.329033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.325 [2024-11-20 11:24:47.383534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.325 [2024-11-20 11:24:47.383588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.325 [2024-11-20 11:24:47.383596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.325 [2024-11-20 11:24:47.383604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.325 [2024-11-20 11:24:47.383611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.325 [2024-11-20 11:24:47.386005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.325 [2024-11-20 11:24:47.386183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.326 [2024-11-20 11:24:47.386301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.326 [2024-11-20 11:24:47.386301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.326 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.326 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:55.326 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:55.326 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.326 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 [2024-11-20 11:24:48.106399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 Malloc0 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 [2024-11-20 11:24:48.186694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.587 [ 00:23:55.587 { 00:23:55.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.587 "subtype": "Discovery", 00:23:55.587 "listen_addresses": [], 00:23:55.587 "allow_any_host": true, 00:23:55.587 "hosts": [] 00:23:55.587 }, 00:23:55.587 { 00:23:55.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.587 "subtype": "NVMe", 00:23:55.587 "listen_addresses": [ 00:23:55.587 { 00:23:55.587 "trtype": "TCP", 00:23:55.587 "adrfam": "IPv4", 00:23:55.587 "traddr": "10.0.0.2", 00:23:55.587 "trsvcid": "4420" 00:23:55.587 } 00:23:55.587 ], 00:23:55.587 "allow_any_host": true, 00:23:55.587 "hosts": [], 00:23:55.587 "serial_number": "SPDK00000000000001", 00:23:55.587 "model_number": "SPDK bdev Controller", 00:23:55.587 "max_namespaces": 2, 00:23:55.587 "min_cntlid": 1, 00:23:55.587 "max_cntlid": 65519, 00:23:55.587 "namespaces": [ 00:23:55.587 { 00:23:55.587 "nsid": 1, 00:23:55.587 "bdev_name": "Malloc0", 00:23:55.587 "name": "Malloc0", 00:23:55.587 "nguid": "0F3AE8702EE543D69E1D1258DD1F8DB5", 00:23:55.587 "uuid": "0f3ae870-2ee5-43d6-9e1d-1258dd1f8db5" 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 } 00:23:55.587 ] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2820517 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:55.587 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:55.849 Malloc1 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.849 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.111 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.111 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:56.111 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.111 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.111 [ 00:23:56.111 { 00:23:56.111 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:56.111 "subtype": "Discovery", 00:23:56.111 "listen_addresses": [], 00:23:56.111 "allow_any_host": true, 00:23:56.111 "hosts": [] 00:23:56.111 }, 00:23:56.111 { 00:23:56.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.111 "subtype": "NVMe", 00:23:56.111 "listen_addresses": [ 00:23:56.111 { 00:23:56.111 "trtype": "TCP", 00:23:56.111 "adrfam": "IPv4", 00:23:56.111 "traddr": "10.0.0.2", 00:23:56.111 "trsvcid": "4420" 00:23:56.111 } 00:23:56.111 ], 00:23:56.111 "allow_any_host": true, 00:23:56.111 "hosts": [], 00:23:56.111 "serial_number": "SPDK00000000000001", 00:23:56.111 "model_number": "SPDK bdev Controller", 00:23:56.111 "max_namespaces": 2, 00:23:56.111 "min_cntlid": 1, 00:23:56.111 "max_cntlid": 65519, 00:23:56.111 "namespaces": [ 00:23:56.111 
{ 00:23:56.111 "nsid": 1, 00:23:56.111 "bdev_name": "Malloc0", 00:23:56.111 "name": "Malloc0", 00:23:56.111 "nguid": "0F3AE8702EE543D69E1D1258DD1F8DB5", 00:23:56.111 "uuid": "0f3ae870-2ee5-43d6-9e1d-1258dd1f8db5" 00:23:56.111 }, 00:23:56.111 { 00:23:56.111 "nsid": 2, 00:23:56.111 "bdev_name": "Malloc1", 00:23:56.111 "name": "Malloc1", 00:23:56.111 "nguid": "E148A8E714454B2F859A4047716D3717", 00:23:56.111 "uuid": "e148a8e7-1445-4b2f-859a-4047716d3717" 00:23:56.111 } 00:23:56.111 ] 00:23:56.111 } 00:23:56.111 ] 00:23:56.111 Asynchronous Event Request test 00:23:56.111 Attaching to 10.0.0.2 00:23:56.111 Attached to 10.0.0.2 00:23:56.112 Registering asynchronous event callbacks... 00:23:56.112 Starting namespace attribute notice tests for all controllers... 00:23:56.112 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:56.112 aer_cb - Changed Namespace 00:23:56.112 Cleaning up... 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2820517 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.112 rmmod nvme_tcp 00:23:56.112 rmmod nvme_fabrics 00:23:56.112 rmmod nvme_keyring 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2820343 ']' 
00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2820343 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2820343 ']' 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2820343 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820343 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820343' 00:23:56.112 killing process with pid 2820343 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2820343 00:23:56.112 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2820343 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.374 11:24:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.919 00:23:58.919 real 0m11.666s 00:23:58.919 user 0m8.647s 00:23:58.919 sys 0m6.213s 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:58.919 ************************************ 00:23:58.919 END TEST nvmf_aer 00:23:58.919 ************************************ 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.919 ************************************ 00:23:58.919 START TEST nvmf_async_init 00:23:58.919 
************************************ 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:58.919 * Looking for test storage... 00:23:58.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.919 --rc genhtml_branch_coverage=1 00:23:58.919 --rc genhtml_function_coverage=1 00:23:58.919 --rc genhtml_legend=1 00:23:58.919 --rc geninfo_all_blocks=1 00:23:58.919 --rc geninfo_unexecuted_blocks=1 00:23:58.919 00:23:58.919 ' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.919 --rc genhtml_branch_coverage=1 00:23:58.919 --rc genhtml_function_coverage=1 00:23:58.919 --rc genhtml_legend=1 00:23:58.919 --rc geninfo_all_blocks=1 00:23:58.919 --rc geninfo_unexecuted_blocks=1 00:23:58.919 00:23:58.919 ' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.919 --rc genhtml_branch_coverage=1 00:23:58.919 --rc genhtml_function_coverage=1 00:23:58.919 --rc genhtml_legend=1 00:23:58.919 --rc geninfo_all_blocks=1 00:23:58.919 --rc geninfo_unexecuted_blocks=1 00:23:58.919 00:23:58.919 ' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:58.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.919 --rc genhtml_branch_coverage=1 00:23:58.919 --rc genhtml_function_coverage=1 00:23:58.919 --rc genhtml_legend=1 00:23:58.919 --rc geninfo_all_blocks=1 00:23:58.919 --rc geninfo_unexecuted_blocks=1 00:23:58.919 00:23:58.919 ' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.919 11:24:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.919 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:58.920 11:24:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f51de3d5400b41afa7aacb83169fd6be 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.920 11:24:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:07.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:07.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:07.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:07.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.062 11:24:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.062 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:07.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:24:07.063 00:24:07.063 --- 10.0.0.2 ping statistics --- 00:24:07.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.063 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:24:07.063 00:24:07.063 --- 10.0.0.1 ping statistics --- 00:24:07.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.063 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2824842 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2824842 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2824842 ']' 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.063 11:24:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.063 [2024-11-20 11:24:59.003918] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:24:07.063 [2024-11-20 11:24:59.003986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.063 [2024-11-20 11:24:59.105293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.063 [2024-11-20 11:24:59.156028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.063 [2024-11-20 11:24:59.156081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.063 [2024-11-20 11:24:59.156089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.063 [2024-11-20 11:24:59.156097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.063 [2024-11-20 11:24:59.156103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.063 [2024-11-20 11:24:59.156856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.324 [2024-11-20 11:24:59.866344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.324 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.324 null0 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f51de3d5400b41afa7aacb83169fd6be 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.325 [2024-11-20 11:24:59.926686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.325 11:24:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.586 nvme0n1 00:24:07.586 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.586 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.586 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.586 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.586 [ 00:24:07.586 { 00:24:07.586 "name": "nvme0n1", 00:24:07.586 "aliases": [ 00:24:07.586 "f51de3d5-400b-41af-a7aa-cb83169fd6be" 00:24:07.586 ], 00:24:07.586 "product_name": "NVMe disk", 00:24:07.586 "block_size": 512, 00:24:07.586 "num_blocks": 2097152, 00:24:07.586 "uuid": "f51de3d5-400b-41af-a7aa-cb83169fd6be", 00:24:07.586 "numa_id": 0, 00:24:07.586 "assigned_rate_limits": { 00:24:07.586 "rw_ios_per_sec": 0, 00:24:07.586 "rw_mbytes_per_sec": 0, 00:24:07.586 "r_mbytes_per_sec": 0, 00:24:07.586 "w_mbytes_per_sec": 0 00:24:07.586 }, 00:24:07.586 "claimed": false, 00:24:07.586 "zoned": false, 00:24:07.586 "supported_io_types": { 00:24:07.586 "read": true, 00:24:07.586 "write": true, 00:24:07.586 "unmap": false, 00:24:07.586 "flush": true, 00:24:07.586 "reset": true, 00:24:07.586 "nvme_admin": true, 00:24:07.586 "nvme_io": true, 00:24:07.586 "nvme_io_md": false, 00:24:07.586 "write_zeroes": true, 00:24:07.586 "zcopy": false, 00:24:07.586 "get_zone_info": false, 00:24:07.586 "zone_management": false, 00:24:07.586 "zone_append": false, 00:24:07.586 "compare": true, 00:24:07.586 "compare_and_write": true, 00:24:07.586 "abort": true, 00:24:07.586 "seek_hole": false, 00:24:07.586 "seek_data": false, 00:24:07.586 "copy": true, 00:24:07.586 "nvme_iov_md": false 00:24:07.586 }, 00:24:07.586 
"memory_domains": [ 00:24:07.586 { 00:24:07.586 "dma_device_id": "system", 00:24:07.586 "dma_device_type": 1 00:24:07.586 } 00:24:07.586 ], 00:24:07.586 "driver_specific": { 00:24:07.586 "nvme": [ 00:24:07.586 { 00:24:07.586 "trid": { 00:24:07.586 "trtype": "TCP", 00:24:07.586 "adrfam": "IPv4", 00:24:07.586 "traddr": "10.0.0.2", 00:24:07.586 "trsvcid": "4420", 00:24:07.586 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:07.586 }, 00:24:07.586 "ctrlr_data": { 00:24:07.586 "cntlid": 1, 00:24:07.586 "vendor_id": "0x8086", 00:24:07.586 "model_number": "SPDK bdev Controller", 00:24:07.586 "serial_number": "00000000000000000000", 00:24:07.586 "firmware_revision": "25.01", 00:24:07.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.587 "oacs": { 00:24:07.587 "security": 0, 00:24:07.587 "format": 0, 00:24:07.587 "firmware": 0, 00:24:07.587 "ns_manage": 0 00:24:07.587 }, 00:24:07.587 "multi_ctrlr": true, 00:24:07.587 "ana_reporting": false 00:24:07.587 }, 00:24:07.587 "vs": { 00:24:07.587 "nvme_version": "1.3" 00:24:07.587 }, 00:24:07.587 "ns_data": { 00:24:07.587 "id": 1, 00:24:07.587 "can_share": true 00:24:07.587 } 00:24:07.587 } 00:24:07.587 ], 00:24:07.587 "mp_policy": "active_passive" 00:24:07.587 } 00:24:07.587 } 00:24:07.587 ] 00:24:07.587 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.587 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:07.587 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.587 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.587 [2024-11-20 11:25:00.203194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:07.587 [2024-11-20 11:25:00.203281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd23ce0 (9): Bad file descriptor 00:24:07.848 [2024-11-20 11:25:00.335270] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:07.848 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.848 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.848 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.848 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.848 [ 00:24:07.848 { 00:24:07.848 "name": "nvme0n1", 00:24:07.848 "aliases": [ 00:24:07.848 "f51de3d5-400b-41af-a7aa-cb83169fd6be" 00:24:07.848 ], 00:24:07.848 "product_name": "NVMe disk", 00:24:07.848 "block_size": 512, 00:24:07.848 "num_blocks": 2097152, 00:24:07.848 "uuid": "f51de3d5-400b-41af-a7aa-cb83169fd6be", 00:24:07.848 "numa_id": 0, 00:24:07.848 "assigned_rate_limits": { 00:24:07.848 "rw_ios_per_sec": 0, 00:24:07.848 "rw_mbytes_per_sec": 0, 00:24:07.848 "r_mbytes_per_sec": 0, 00:24:07.848 "w_mbytes_per_sec": 0 00:24:07.848 }, 00:24:07.848 "claimed": false, 00:24:07.848 "zoned": false, 00:24:07.848 "supported_io_types": { 00:24:07.848 "read": true, 00:24:07.848 "write": true, 00:24:07.848 "unmap": false, 00:24:07.848 "flush": true, 00:24:07.848 "reset": true, 00:24:07.848 "nvme_admin": true, 00:24:07.849 "nvme_io": true, 00:24:07.849 "nvme_io_md": false, 00:24:07.849 "write_zeroes": true, 00:24:07.849 "zcopy": false, 00:24:07.849 "get_zone_info": false, 00:24:07.849 "zone_management": false, 00:24:07.849 "zone_append": false, 00:24:07.849 "compare": true, 00:24:07.849 "compare_and_write": true, 00:24:07.849 "abort": true, 00:24:07.849 "seek_hole": false, 00:24:07.849 "seek_data": false, 00:24:07.849 "copy": true, 00:24:07.849 "nvme_iov_md": false 00:24:07.849 }, 00:24:07.849 "memory_domains": [ 00:24:07.849 { 00:24:07.849 "dma_device_id": "system", 00:24:07.849 "dma_device_type": 1 00:24:07.849 } 00:24:07.849 ], 00:24:07.849 "driver_specific": { 00:24:07.849 "nvme": [ 00:24:07.849 { 00:24:07.849 "trid": { 00:24:07.849 "trtype": "TCP", 00:24:07.849 "adrfam": "IPv4", 00:24:07.849 "traddr": "10.0.0.2", 00:24:07.849 "trsvcid": "4420", 00:24:07.849 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:07.849 }, 00:24:07.849 "ctrlr_data": { 00:24:07.849 "cntlid": 2, 00:24:07.849 "vendor_id": "0x8086", 00:24:07.849 "model_number": "SPDK bdev Controller", 00:24:07.849 "serial_number": "00000000000000000000", 00:24:07.849 "firmware_revision": "25.01", 00:24:07.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.849 "oacs": { 00:24:07.849 "security": 0, 00:24:07.849 "format": 0, 00:24:07.849 "firmware": 0, 00:24:07.849 "ns_manage": 0 00:24:07.849 }, 00:24:07.849 "multi_ctrlr": true, 00:24:07.849 "ana_reporting": false 00:24:07.849 }, 00:24:07.849 "vs": { 00:24:07.849 "nvme_version": "1.3" 00:24:07.849 }, 00:24:07.849 "ns_data": { 00:24:07.849 "id": 1, 00:24:07.849 "can_share": true 00:24:07.849 } 00:24:07.849 } 00:24:07.849 ], 00:24:07.849 "mp_policy": "active_passive" 00:24:07.849 } 00:24:07.849 } 00:24:07.849 ] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6qmaKcgbOq 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6qmaKcgbOq 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.6qmaKcgbOq 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 [2024-11-20 11:25:00.423854] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:07.849 [2024-11-20 11:25:00.424020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 [2024-11-20 11:25:00.447933] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.849 nvme0n1 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 [ 00:24:07.849 { 00:24:07.849 "name": "nvme0n1", 00:24:07.849 "aliases": [ 00:24:07.849 "f51de3d5-400b-41af-a7aa-cb83169fd6be" 00:24:07.849 ], 00:24:07.849 "product_name": "NVMe disk", 00:24:07.849 "block_size": 512, 00:24:07.849 "num_blocks": 2097152, 00:24:07.849 "uuid": "f51de3d5-400b-41af-a7aa-cb83169fd6be", 00:24:07.849 "numa_id": 0, 00:24:07.849 "assigned_rate_limits": { 00:24:07.849 "rw_ios_per_sec": 0, 00:24:07.849 "rw_mbytes_per_sec": 0, 00:24:07.849 "r_mbytes_per_sec": 0, 00:24:07.849 "w_mbytes_per_sec": 0 00:24:07.849 }, 00:24:07.849 "claimed": false, 00:24:07.849 "zoned": false, 00:24:07.849 "supported_io_types": { 00:24:07.849 "read": true, 00:24:07.849 "write": true, 00:24:07.849 "unmap": false, 00:24:07.849 "flush": true, 00:24:07.849 "reset": true, 00:24:07.849 "nvme_admin": true, 00:24:07.849 "nvme_io": true, 00:24:07.849 "nvme_io_md": false, 00:24:07.849 "write_zeroes": true, 00:24:07.849 "zcopy": false, 00:24:07.849 "get_zone_info": false, 00:24:07.849 "zone_management": false, 00:24:07.849 "zone_append": false, 00:24:07.849 "compare": true, 00:24:07.849 "compare_and_write": true, 00:24:07.849 "abort": true, 00:24:07.849 "seek_hole": false, 00:24:07.849 "seek_data": false, 00:24:07.849 "copy": true, 00:24:07.849 "nvme_iov_md": false 00:24:07.849 }, 00:24:07.849 "memory_domains": [ 00:24:07.849 { 00:24:07.849 "dma_device_id": "system", 00:24:07.849 "dma_device_type": 1 00:24:07.849 } 00:24:07.849 ], 00:24:07.849 "driver_specific": { 00:24:07.849 "nvme": [ 00:24:07.849 { 00:24:07.849 "trid": { 00:24:07.849 "trtype": "TCP", 00:24:07.849 "adrfam": "IPv4", 00:24:07.849 "traddr": "10.0.0.2", 00:24:07.849 "trsvcid": "4421", 00:24:07.849 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:07.849 }, 00:24:07.849 "ctrlr_data": { 00:24:07.849 "cntlid": 3, 00:24:07.849 "vendor_id": "0x8086", 00:24:07.849 "model_number": "SPDK bdev Controller", 00:24:07.849 "serial_number": "00000000000000000000", 00:24:07.849 "firmware_revision": "25.01", 00:24:07.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.849 "oacs": { 00:24:07.849 "security": 0, 00:24:07.849 "format": 0, 00:24:07.849 "firmware": 0, 00:24:07.849 "ns_manage": 0 00:24:07.849 }, 00:24:07.849 "multi_ctrlr": true, 00:24:07.849 "ana_reporting": false 00:24:07.849 }, 00:24:07.849 "vs": { 00:24:07.849 "nvme_version": "1.3" 00:24:07.849 }, 00:24:07.849 "ns_data": { 00:24:07.849 "id": 1, 00:24:07.849 "can_share": true 00:24:07.849 } 00:24:07.849 } 00:24:07.849 ], 00:24:07.849 "mp_policy": "active_passive" 00:24:07.849 } 00:24:07.849 } 00:24:07.849 ] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.6qmaKcgbOq 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.849 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.849 rmmod nvme_tcp 00:24:08.111 rmmod nvme_fabrics 00:24:08.111 rmmod nvme_keyring 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2824842 ']' 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2824842 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2824842 ']' 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2824842 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824842 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824842' 00:24:08.111 killing process with pid 2824842 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2824842 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2824842 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.111 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
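The nvmftestfini sequence around this point follows a fixed teardown order that matters when reproducing these tests by hand: flush, unload the kernel initiator modules, kill the target by PID, then strip only the SPDK-tagged firewall rules. A condensed sketch, assuming the target PID was captured at launch (2824842 is the nvmfpid from this run):

# Unload host-side kernel modules; the rmmod lines above are this step.
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Kill the target only if the PID still belongs to an SPDK reactor,
# mirroring the killprocess helper traced above.
pid=2824842
if [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
  kill "$pid"
  wait "$pid"   # only valid if nvmf_tgt is a child of this shell
fi

# Drop just the SPDK_NVMF-tagged iptables rules added during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore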
00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.373 11:25:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.288 00:24:10.288 real 0m11.779s 00:24:10.288 user 0m4.300s 00:24:10.288 sys 0m6.054s 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.288 ************************************ 00:24:10.288 END TEST nvmf_async_init 00:24:10.288 ************************************ 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.288 11:25:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.288 ************************************ 00:24:10.288 START TEST dma 00:24:10.288 ************************************ 00:24:10.288 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:10.550 * Looking for test storage... 00:24:10.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.550 --rc genhtml_branch_coverage=1 00:24:10.550 --rc genhtml_function_coverage=1 00:24:10.550 --rc genhtml_legend=1 00:24:10.550 --rc geninfo_all_blocks=1 00:24:10.550 --rc geninfo_unexecuted_blocks=1 00:24:10.550 00:24:10.550 ' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.550 --rc genhtml_branch_coverage=1 00:24:10.550 --rc genhtml_function_coverage=1 00:24:10.550 --rc genhtml_legend=1 00:24:10.550 --rc geninfo_all_blocks=1 00:24:10.550 --rc geninfo_unexecuted_blocks=1 00:24:10.550 00:24:10.550 ' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.550 --rc genhtml_branch_coverage=1 00:24:10.550 --rc genhtml_function_coverage=1 00:24:10.550 --rc genhtml_legend=1 00:24:10.550 --rc geninfo_all_blocks=1 00:24:10.550 --rc geninfo_unexecuted_blocks=1 00:24:10.550 00:24:10.550 ' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.550 --rc genhtml_branch_coverage=1 00:24:10.550 --rc genhtml_function_coverage=1 00:24:10.550 --rc genhtml_legend=1 00:24:10.550 --rc geninfo_all_blocks=1 00:24:10.550 --rc geninfo_unexecuted_blocks=1 00:24:10.550 00:24:10.550 ' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.550 
11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.550 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:10.551 00:24:10.551 real 0m0.241s 00:24:10.551 user 0m0.133s 00:24:10.551 sys 0m0.124s 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.551 11:25:03 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:10.551 ************************************ 00:24:10.551 END TEST dma 00:24:10.551 ************************************ 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.813 ************************************ 00:24:10.813 START TEST nvmf_identify 00:24:10.813 
************************************ 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:10.813 * Looking for test storage... 00:24:10.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.813 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:11.075 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.075 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:11.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.075 --rc genhtml_branch_coverage=1 00:24:11.075 --rc genhtml_function_coverage=1 00:24:11.075 --rc genhtml_legend=1 00:24:11.075 --rc geninfo_all_blocks=1 00:24:11.075 --rc geninfo_unexecuted_blocks=1 00:24:11.076 00:24:11.076 ' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.076 --rc genhtml_branch_coverage=1 00:24:11.076 --rc genhtml_function_coverage=1 00:24:11.076 --rc genhtml_legend=1 00:24:11.076 --rc geninfo_all_blocks=1 00:24:11.076 --rc geninfo_unexecuted_blocks=1 00:24:11.076 00:24:11.076 ' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.076 --rc genhtml_branch_coverage=1 00:24:11.076 --rc genhtml_function_coverage=1 00:24:11.076 --rc genhtml_legend=1 00:24:11.076 --rc geninfo_all_blocks=1 00:24:11.076 --rc geninfo_unexecuted_blocks=1 00:24:11.076 00:24:11.076 ' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:11.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.076 --rc genhtml_branch_coverage=1 00:24:11.076 --rc genhtml_function_coverage=1 00:24:11.076 --rc genhtml_legend=1 00:24:11.076 --rc geninfo_all_blocks=1 00:24:11.076 --rc geninfo_unexecuted_blocks=1 00:24:11.076 00:24:11.076 ' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.076 11:25:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.221 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:19.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:19.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:19.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:19.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.222 11:25:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:19.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:24:19.222 00:24:19.222 --- 10.0.0.2 ping statistics --- 00:24:19.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.222 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:24:19.222 00:24:19.222 --- 10.0.0.1 ping statistics --- 00:24:19.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.222 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2829577 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2829577 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2829577 ']' 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.222 11:25:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.222 [2024-11-20 11:25:11.199223] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
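The block above shows nvmf_tcp_init building the physical-NIC topology for the identify test: the two e810 ports found during the PCI scan become the two ends of the link, with cvl_0_0 moved into a private network namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator (10.0.0.1), and a ping in each direction proving connectivity before the target starts. A minimal sketch of the same wiring, using the interface and namespace names from the trace:

# The target side lives in its own namespace so the host and target
# stacks cannot short-circuit through loopback on the same machine.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, tagged with a comment so teardown can strip it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Verify both directions, exactly as the harness does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1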
00:24:19.223 [2024-11-20 11:25:11.199286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.223 [2024-11-20 11:25:11.299235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.223 [2024-11-20 11:25:11.353124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.223 [2024-11-20 11:25:11.353188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.223 [2024-11-20 11:25:11.353197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.223 [2024-11-20 11:25:11.353205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.223 [2024-11-20 11:25:11.353211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.223 [2024-11-20 11:25:11.355290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.223 [2024-11-20 11:25:11.355630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.223 [2024-11-20 11:25:11.355766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.223 [2024-11-20 11:25:11.355768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 [2024-11-20 11:25:12.039697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 Malloc0 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 [2024-11-20 11:25:12.159137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.484 [ 00:24:19.484 { 00:24:19.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:19.484 "subtype": "Discovery", 00:24:19.484 "listen_addresses": [ 00:24:19.484 { 00:24:19.484 "trtype": "TCP", 00:24:19.484 "adrfam": "IPv4", 00:24:19.484 "traddr": "10.0.0.2", 00:24:19.484 "trsvcid": "4420" 00:24:19.484 } 00:24:19.484 ], 00:24:19.484 "allow_any_host": true, 00:24:19.484 "hosts": [] 00:24:19.484 }, 00:24:19.484 { 00:24:19.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.484 "subtype": "NVMe", 00:24:19.484 "listen_addresses": [ 00:24:19.484 { 00:24:19.484 "trtype": "TCP", 00:24:19.484 "adrfam": "IPv4", 00:24:19.484 "traddr": "10.0.0.2", 00:24:19.484 "trsvcid": "4420" 00:24:19.484 } 00:24:19.484 ], 00:24:19.484 "allow_any_host": true, 00:24:19.484 "hosts": [], 00:24:19.484 "serial_number": "SPDK00000000000001", 00:24:19.484 "model_number": "SPDK bdev Controller", 00:24:19.484 "max_namespaces": 32, 00:24:19.484 "min_cntlid": 1, 00:24:19.484 "max_cntlid": 65519, 00:24:19.484 "namespaces": [ 00:24:19.484 { 00:24:19.484 "nsid": 1, 00:24:19.484 "bdev_name": "Malloc0", 00:24:19.484 "name": "Malloc0", 00:24:19.484 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:19.484 "eui64": "ABCDEF0123456789", 00:24:19.484 "uuid": "8cc1e108-5f69-40f6-b68e-0d7585b19682" 00:24:19.484 } 00:24:19.484 ] 00:24:19.484 } 00:24:19.484 ] 00:24:19.484 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.485 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:19.485 [2024-11-20 11:25:12.222777] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:24:19.485 [2024-11-20 11:25:12.222828] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829656 ] 00:24:19.750 [2024-11-20 11:25:12.284954] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:19.750 [2024-11-20 11:25:12.285023] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:19.750 [2024-11-20 11:25:12.285029] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:19.750 [2024-11-20 11:25:12.285045] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:19.750 [2024-11-20 11:25:12.285058] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:19.750 [2024-11-20 11:25:12.288657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:19.750 [2024-11-20 11:25:12.288708] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a5e690 0 00:24:19.750 [2024-11-20 11:25:12.296177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:19.751 [2024-11-20 11:25:12.296195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:19.751 [2024-11-20 11:25:12.296200] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:19.751 [2024-11-20 11:25:12.296204] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:19.751 [2024-11-20 11:25:12.296257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.296264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.296268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.296285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:19.751 [2024-11-20 11:25:12.296310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.303174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.303185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.303189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.303206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:19.751 [2024-11-20 11:25:12.303215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:19.751 [2024-11-20 11:25:12.303221] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:19.751 [2024-11-20 11:25:12.303238] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.303256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.303273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.303477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.303483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.303487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.303497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:19.751 [2024-11-20 11:25:12.303505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:19.751 [2024-11-20 11:25:12.303512] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.303527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.303539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.303708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.303716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.303719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.303729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:19.751 [2024-11-20 11:25:12.303737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:19.751 [2024-11-20 11:25:12.303748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.303763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.303774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 
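The identify test setup traced above is a compact recipe for a discoverable NVMe/TCP target: launch nvmf_tgt inside the target namespace, create the TCP transport, back a subsystem with a 64 MiB malloc bdev, and add listeners for both the subsystem and the discovery NQN before pointing spdk_nvme_identify at it. A sketch of that sequence, with scripts/rpc.py assumed as the rpc_cmd wrapper and paths relative to the SPDK tree:

# Start the target in the namespace built earlier; the harness waits on
# the RPC socket (waitforlisten) before issuing any RPCs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Query the discovery service. The DEBUG flood that follows in the log is
# this command's fabric CONNECT, CC.EN=1 / CSTS.RDY polling, and IDENTIFY.
./build/bin/spdk_nvme_identify -L all \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'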
00:24:19.751 [2024-11-20 11:25:12.303940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.303946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.303949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.303959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:19.751 [2024-11-20 11:25:12.303969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.303977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.303983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.303994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.304174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.304181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.304184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.304193] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:19.751 [2024-11-20 11:25:12.304198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:19.751 [2024-11-20 11:25:12.304206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:19.751 [2024-11-20 11:25:12.304318] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:19.751 [2024-11-20 11:25:12.304324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:19.751 [2024-11-20 11:25:12.304333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.304347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.304359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.304575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.304582] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.304585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.304597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:19.751 [2024-11-20 11:25:12.304607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.304621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.304632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.304844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.751 [2024-11-20 11:25:12.304850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.751 [2024-11-20 11:25:12.304853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.751 [2024-11-20 11:25:12.304862] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:19.751 [2024-11-20 11:25:12.304866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:19.751 [2024-11-20 11:25:12.304874] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:19.751 [2024-11-20 11:25:12.304883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:19.751 [2024-11-20 11:25:12.304894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.304898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.751 [2024-11-20 11:25:12.304905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.751 [2024-11-20 11:25:12.304915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.751 [2024-11-20 11:25:12.305170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.751 [2024-11-20 11:25:12.305178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.751 [2024-11-20 11:25:12.305182] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.305186] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5e690): datao=0, datal=4096, cccid=0 00:24:19.751 [2024-11-20 11:25:12.305191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1ac0100) on tqpair(0x1a5e690): expected_datao=0, payload_size=4096 00:24:19.751 [2024-11-20 11:25:12.305196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.305205] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.305209] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.751 [2024-11-20 11:25:12.305326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.752 [2024-11-20 11:25:12.305332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.752 [2024-11-20 11:25:12.305336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.752 [2024-11-20 11:25:12.305349] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:19.752 [2024-11-20 11:25:12.305354] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:19.752 [2024-11-20 11:25:12.305359] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:19.752 [2024-11-20 11:25:12.305370] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:19.752 [2024-11-20 11:25:12.305375] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:19.752 [2024-11-20 11:25:12.305380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:19.752 [2024-11-20 11:25:12.305392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:19.752 [2024-11-20 11:25:12.305399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.305414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:19.752 [2024-11-20 11:25:12.305426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.752 [2024-11-20 11:25:12.305635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.752 [2024-11-20 11:25:12.305641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.752 [2024-11-20 11:25:12.305645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690 00:24:19.752 [2024-11-20 11:25:12.305656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5e690) 00:24:19.752 
[2024-11-20 11:25:12.305670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.752 [2024-11-20 11:25:12.305677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.305690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.752 [2024-11-20 11:25:12.305696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.305709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.752 [2024-11-20 11:25:12.305716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.305729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.752 [2024-11-20 11:25:12.305734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:19.752 [2024-11-20 11:25:12.305742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:19.752 [2024-11-20 11:25:12.305749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.305755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.305762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.752 [2024-11-20 11:25:12.305775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0100, cid 0, qid 0 00:24:19.752 [2024-11-20 11:25:12.305780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0280, cid 1, qid 0 00:24:19.752 [2024-11-20 11:25:12.305785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0400, cid 2, qid 0 00:24:19.752 [2024-11-20 11:25:12.305790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.752 [2024-11-20 11:25:12.305795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0700, cid 4, qid 0 00:24:19.752 [2024-11-20 11:25:12.306028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.752 [2024-11-20 11:25:12.306034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.752 [2024-11-20 11:25:12.306037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:19.752 [2024-11-20 11:25:12.306041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0700) on tqpair=0x1a5e690 00:24:19.752 [2024-11-20 11:25:12.306049] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:19.752 [2024-11-20 11:25:12.306055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:19.752 [2024-11-20 11:25:12.306066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.306070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.306076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.752 [2024-11-20 11:25:12.306087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0700, cid 4, qid 0 00:24:19.752 [2024-11-20 11:25:12.306281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.752 [2024-11-20 11:25:12.306289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.752 [2024-11-20 11:25:12.306293] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.306297] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5e690): datao=0, datal=4096, cccid=4 00:24:19.752 [2024-11-20 11:25:12.306301] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac0700) on tqpair(0x1a5e690): expected_datao=0, payload_size=4096 00:24:19.752 [2024-11-20 11:25:12.306306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.306313] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.306316] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.752 [2024-11-20 11:25:12.351185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.752 [2024-11-20 11:25:12.351189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0700) on tqpair=0x1a5e690 00:24:19.752 [2024-11-20 11:25:12.351211] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:19.752 [2024-11-20 11:25:12.351243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.351257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.752 [2024-11-20 11:25:12.351265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.351282] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.752 [2024-11-20 11:25:12.351302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0700, cid 4, qid 0 00:24:19.752 [2024-11-20 11:25:12.351307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0880, cid 5, qid 0 00:24:19.752 [2024-11-20 11:25:12.351523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.752 [2024-11-20 11:25:12.351530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.752 [2024-11-20 11:25:12.351534] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351537] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5e690): datao=0, datal=1024, cccid=4 00:24:19.752 [2024-11-20 11:25:12.351542] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac0700) on tqpair(0x1a5e690): expected_datao=0, payload_size=1024 00:24:19.752 [2024-11-20 11:25:12.351547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351554] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351558] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.752 [2024-11-20 11:25:12.351569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.752 [2024-11-20 11:25:12.351573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.351577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0880) on tqpair=0x1a5e690 00:24:19.752 [2024-11-20 11:25:12.393374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.752 [2024-11-20 11:25:12.393386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.752 [2024-11-20 11:25:12.393390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.393394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0700) on tqpair=0x1a5e690 00:24:19.752 [2024-11-20 11:25:12.393408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.752 [2024-11-20 11:25:12.393412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5e690) 00:24:19.752 [2024-11-20 11:25:12.393419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.752 [2024-11-20 11:25:12.393437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0700, cid 4, qid 0 00:24:19.752 [2024-11-20 11:25:12.393668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:19.752 [2024-11-20 11:25:12.393675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:19.752 [2024-11-20 11:25:12.393678] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:19.753 [2024-11-20 11:25:12.393682] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5e690): datao=0, datal=3072, cccid=4 00:24:19.753 [2024-11-20 11:25:12.393687] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac0700) on tqpair(0x1a5e690): expected_datao=0, payload_size=3072 00:24:19.753 [2024-11-20 11:25:12.393691] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.393698] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.393702] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.393847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:19.753 [2024-11-20 11:25:12.393853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:19.753 [2024-11-20 11:25:12.393856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.393861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0700) on tqpair=0x1a5e690
00:24:19.753 [2024-11-20 11:25:12.393876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.393880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5e690)
00:24:19.753 [2024-11-20 11:25:12.393886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.753 [2024-11-20 11:25:12.393900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0700, cid 4, qid 0
00:24:19.753 [2024-11-20 11:25:12.394124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:19.753 [2024-11-20 11:25:12.394131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:19.753 [2024-11-20 11:25:12.394134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.394138] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5e690): datao=0, datal=8, cccid=4
00:24:19.753 [2024-11-20 11:25:12.394142] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ac0700) on tqpair(0x1a5e690): expected_datao=0, payload_size=8
00:24:19.753 [2024-11-20 11:25:12.394147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.394153] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.394157] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.438169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:19.753 [2024-11-20 11:25:12.438182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:19.753 [2024-11-20 11:25:12.438186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:19.753 [2024-11-20 11:25:12.438190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0700) on tqpair=0x1a5e690
00:24:19.753 =====================================================
00:24:19.753 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:19.753 =====================================================
00:24:19.753 Controller Capabilities/Features
00:24:19.753 ================================
00:24:19.753 Vendor ID: 0000
00:24:19.753 Subsystem Vendor ID: 0000
00:24:19.753 Serial Number: ....................
00:24:19.753 Model Number: ........................................
00:24:19.753 Firmware Version: 25.01
00:24:19.753 Recommended Arb Burst: 0
00:24:19.753 IEEE OUI Identifier: 00 00 00
00:24:19.753 Multi-path I/O
00:24:19.753 May have multiple subsystem ports: No
00:24:19.753 May have multiple controllers: No
00:24:19.753 Associated with SR-IOV VF: No
00:24:19.753 Max Data Transfer Size: 131072
00:24:19.753 Max Number of Namespaces: 0
00:24:19.753 Max Number of I/O Queues: 1024
00:24:19.753 NVMe Specification Version (VS): 1.3
00:24:19.753 NVMe Specification Version (Identify): 1.3
00:24:19.753 Maximum Queue Entries: 128
00:24:19.753 Contiguous Queues Required: Yes
00:24:19.753 Arbitration Mechanisms Supported
00:24:19.753 Weighted Round Robin: Not Supported
00:24:19.753 Vendor Specific: Not Supported
00:24:19.753 Reset Timeout: 15000 ms
00:24:19.753 Doorbell Stride: 4 bytes
00:24:19.753 NVM Subsystem Reset: Not Supported
00:24:19.753 Command Sets Supported
00:24:19.753 NVM Command Set: Supported
00:24:19.753 Boot Partition: Not Supported
00:24:19.753 Memory Page Size Minimum: 4096 bytes
00:24:19.753 Memory Page Size Maximum: 4096 bytes
00:24:19.753 Persistent Memory Region: Not Supported
00:24:19.753 Optional Asynchronous Events Supported
00:24:19.753 Namespace Attribute Notices: Not Supported
00:24:19.753 Firmware Activation Notices: Not Supported
00:24:19.753 ANA Change Notices: Not Supported
00:24:19.753 PLE Aggregate Log Change Notices: Not Supported
00:24:19.753 LBA Status Info Alert Notices: Not Supported
00:24:19.753 EGE Aggregate Log Change Notices: Not Supported
00:24:19.753 Normal NVM Subsystem Shutdown event: Not Supported
00:24:19.753 Zone Descriptor Change Notices: Not Supported
00:24:19.753 Discovery Log Change Notices: Supported
00:24:19.753 Controller Attributes
00:24:19.753 128-bit Host Identifier: Not Supported
00:24:19.753 Non-Operational Permissive Mode: Not Supported
00:24:19.753 NVM Sets: Not Supported
00:24:19.753 Read Recovery Levels: Not Supported
00:24:19.753 Endurance Groups: Not Supported
00:24:19.753 Predictable Latency Mode: Not Supported
00:24:19.753 Traffic Based Keep ALive: Not Supported
00:24:19.753 Namespace Granularity: Not Supported
00:24:19.753 SQ Associations: Not Supported
00:24:19.753 UUID List: Not Supported
00:24:19.753 Multi-Domain Subsystem: Not Supported
00:24:19.753 Fixed Capacity Management: Not Supported
00:24:19.753 Variable Capacity Management: Not Supported
00:24:19.753 Delete Endurance Group: Not Supported
00:24:19.753 Delete NVM Set: Not Supported
00:24:19.753 Extended LBA Formats Supported: Not Supported
00:24:19.753 Flexible Data Placement Supported: Not Supported
00:24:19.753
00:24:19.753 Controller Memory Buffer Support
00:24:19.753 ================================
00:24:19.753 Supported: No
00:24:19.753
00:24:19.753 Persistent Memory Region Support
00:24:19.753 ================================
00:24:19.753 Supported: No
00:24:19.753
00:24:19.753 Admin Command Set Attributes
00:24:19.753 ============================
00:24:19.753 Security Send/Receive: Not Supported
00:24:19.753 Format NVM: Not Supported
00:24:19.753 Firmware Activate/Download: Not Supported
00:24:19.753 Namespace Management: Not Supported
00:24:19.753 Device Self-Test: Not Supported
00:24:19.753 Directives: Not Supported
00:24:19.753 NVMe-MI: Not Supported
00:24:19.753 Virtualization Management: Not Supported
00:24:19.753 Doorbell Buffer Config: Not Supported
00:24:19.753 Get LBA Status Capability: Not Supported
00:24:19.753 Command & Feature Lockdown Capability: Not Supported
00:24:19.753 Abort Command Limit: 1
00:24:19.753 Async Event Request Limit: 4
00:24:19.753 Number of Firmware Slots: N/A
00:24:19.753 Firmware Slot 1 Read-Only: N/A
00:24:19.753 Firmware Activation Without Reset: N/A
00:24:19.753 Multiple Update Detection Support: N/A
00:24:19.753 Firmware Update Granularity: No Information Provided
00:24:19.753 Per-Namespace SMART Log: No
00:24:19.753 Asymmetric Namespace Access Log Page: Not Supported
00:24:19.753 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:19.753 Command Effects Log Page: Not Supported
00:24:19.753 Get Log Page Extended Data: Supported
00:24:19.753 Telemetry Log Pages: Not Supported
00:24:19.753 Persistent Event Log Pages: Not Supported
00:24:19.753 Supported Log Pages Log Page: May Support
00:24:19.753 Commands Supported & Effects Log Page: Not Supported
00:24:19.753 Feature Identifiers & Effects Log Page:May Support
00:24:19.753 NVMe-MI Commands & Effects Log Page: May Support
00:24:19.753 Data Area 4 for Telemetry Log: Not Supported
00:24:19.753 Error Log Page Entries Supported: 128
00:24:19.753 Keep Alive: Not Supported
00:24:19.753
00:24:19.753 NVM Command Set Attributes
00:24:19.753 ==========================
00:24:19.753 Submission Queue Entry Size
00:24:19.753 Max: 1
00:24:19.753 Min: 1
00:24:19.753 Completion Queue Entry Size
00:24:19.753 Max: 1
00:24:19.753 Min: 1
00:24:19.753 Number of Namespaces: 0
00:24:19.753 Compare Command: Not Supported
00:24:19.753 Write Uncorrectable Command: Not Supported
00:24:19.753 Dataset Management Command: Not Supported
00:24:19.753 Write Zeroes Command: Not Supported
00:24:19.753 Set Features Save Field: Not Supported
00:24:19.753 Reservations: Not Supported
00:24:19.753 Timestamp: Not Supported
00:24:19.753 Copy: Not Supported
00:24:19.753 Volatile Write Cache: Not Present
00:24:19.753 Atomic Write Unit (Normal): 1
00:24:19.753 Atomic Write Unit (PFail): 1
00:24:19.753 Atomic Compare & Write Unit: 1
00:24:19.753 Fused Compare & Write: Supported
00:24:19.753 Scatter-Gather List
00:24:19.753 SGL Command Set: Supported
00:24:19.753 SGL Keyed: Supported
00:24:19.753 SGL Bit Bucket Descriptor: Not Supported
00:24:19.753 SGL Metadata Pointer: Not Supported
00:24:19.753 Oversized SGL: Not Supported
00:24:19.753 SGL Metadata Address: Not Supported
00:24:19.753 SGL Offset: Supported
00:24:19.753 Transport SGL Data Block: Not Supported
00:24:19.753 Replay Protected Memory Block: Not Supported
00:24:19.753
00:24:19.753 Firmware Slot Information
00:24:19.753 =========================
00:24:19.753 Active slot: 0
00:24:19.753
00:24:19.753
00:24:19.753 Error Log
00:24:19.753 =========
00:24:19.753
00:24:19.753 Active Namespaces
00:24:19.754 =================
00:24:19.754 Discovery Log Page
00:24:19.754 ==================
00:24:19.754 Generation Counter: 2
00:24:19.754 Number of Records: 2
00:24:19.754 Record Format: 0
00:24:19.754
00:24:19.754 Discovery Log Entry 0
00:24:19.754 ----------------------
00:24:19.754 Transport Type: 3 (TCP)
00:24:19.754 Address Family: 1 (IPv4)
00:24:19.754 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:19.754 Entry Flags:
00:24:19.754 Duplicate Returned Information: 1
00:24:19.754 Explicit Persistent Connection Support for Discovery: 1
00:24:19.754 Transport Requirements:
00:24:19.754 Secure Channel: Not Required
00:24:19.754 Port ID: 0 (0x0000)
00:24:19.754 Controller ID: 65535 (0xffff)
00:24:19.754 Admin Max SQ Size: 128
00:24:19.754 Transport Service Identifier: 4420
00:24:19.754 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:19.754 Transport Address: 10.0.0.2
00:24:19.754 Discovery Log Entry 1
00:24:19.754 ----------------------
00:24:19.754 Transport Type: 3 (TCP)
00:24:19.754 Address Family: 1 (IPv4)
00:24:19.754 Subsystem Type: 2 (NVM Subsystem)
00:24:19.754 Entry Flags:
00:24:19.754 Duplicate Returned Information: 0
00:24:19.754 Explicit Persistent Connection Support for Discovery: 0
00:24:19.754 Transport Requirements:
00:24:19.754 Secure Channel: Not Required
00:24:19.754 Port ID: 0 (0x0000)
00:24:19.754 Controller ID: 65535 (0xffff)
00:24:19.754 Admin Max SQ Size: 128
00:24:19.754 Transport Service Identifier: 4420
00:24:19.754 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:24:19.754 Transport Address: 10.0.0.2 [2024-11-20 11:25:12.438302] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:24:19.754 [2024-11-20 11:25:12.438314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0100) on tqpair=0x1a5e690
00:24:19.754 [2024-11-20 11:25:12.438322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.754 [2024-11-20 11:25:12.438328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0280) on tqpair=0x1a5e690
00:24:19.754 [2024-11-20 11:25:12.438332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.754 [2024-11-20 11:25:12.438337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0400) on tqpair=0x1a5e690
00:24:19.754 [2024-11-20 11:25:12.438342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.754 [2024-11-20 11:25:12.438347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690
00:24:19.754 [2024-11-20 11:25:12.438352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:19.754 [2024-11-20 11:25:12.438364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:19.754 [2024-11-20 11:25:12.438368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:19.754 [2024-11-20 11:25:12.438372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690)
00:24:19.754 [2024-11-20 11:25:12.438379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.754 [2024-11-20 11:25:12.438394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0
00:24:19.754 [2024-11-20 11:25:12.438596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:19.754 [2024-11-20 11:25:12.438603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:19.754 [2024-11-20 11:25:12.438607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:19.754 [2024-11-20 11:25:12.438611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690
00:24:19.754 [2024-11-20 11:25:12.438620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:19.754 [2024-11-20 11:25:12.438624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:19.754 [2024-11-20 11:25:12.438628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690)
00:24:19.754 [2024-11-20
11:25:12.438635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.438649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.438875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.438882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.754 [2024-11-20 11:25:12.438885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.438889] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.754 [2024-11-20 11:25:12.438894] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:19.754 [2024-11-20 11:25:12.438899] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:19.754 [2024-11-20 11:25:12.438909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.438913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.438916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.754 [2024-11-20 11:25:12.438923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.438933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.439143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.439149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.754 [2024-11-20 11:25:12.439153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.754 [2024-11-20 11:25:12.439174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.754 [2024-11-20 11:25:12.439188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.439199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.439412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.439419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.754 [2024-11-20 11:25:12.439422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.754 [2024-11-20 11:25:12.439436] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439443] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.754 [2024-11-20 11:25:12.439450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.439460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.439624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.439631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.754 [2024-11-20 11:25:12.439634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.754 [2024-11-20 11:25:12.439650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.754 [2024-11-20 11:25:12.439664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.439675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.439857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.439863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.754 [2024-11-20 11:25:12.439867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.754 [2024-11-20 11:25:12.439881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.439888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.754 [2024-11-20 11:25:12.439895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.439906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.440078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.440084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.754 [2024-11-20 11:25:12.440088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.440091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.754 [2024-11-20 11:25:12.440101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.440105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.754 [2024-11-20 11:25:12.440109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.754 [2024-11-20 11:25:12.440116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.754 [2024-11-20 11:25:12.440126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.754 [2024-11-20 11:25:12.440308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.754 [2024-11-20 11:25:12.440316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.440319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.440333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.440347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.440358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.440529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.440536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.440539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.440555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.440570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.440580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.440759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.440765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.440768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.440782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.440789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.440796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.440806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 
[2024-11-20 11:25:12.441001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.441007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.441010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.441024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.441039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.441049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.441226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.441232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.441236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.441249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.441264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.441274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.441478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.441484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.441488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.441502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.441521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.441531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.441712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.441718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
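The run of near-identical FABRIC PROPERTY GET completions on cid:3 above and below is the shutdown poll loop: after logging "RTD3E = 0 us" and "shutdown timeout = 10000 ms", the host writes CC.SHN and then reads CSTS once per poll until CSTS.SHST reports shutdown complete. In the same hypothetical prop_get()/prop_set() terms as the earlier sketch (field encodings per the NVMe specification):

#define NVME_CC_SHN_NORMAL   (1u << 14)  /* CC.SHN = 01b, normal shutdown */
#define NVME_CSTS_SHST_MASK  (3u << 2)   /* CSTS.SHST, bits 3:2 */
#define NVME_CSTS_SHST_DONE  (2u << 2)   /* 10b: shutdown processing complete */

prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_SHN_NORMAL);
while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK) != NVME_CSTS_SHST_DONE)
        ;  /* each iteration is one PROPERTY GET capsule in this trace */

The loop exits quickly here: the trace goes on to report "shutdown complete in 7 milliseconds", well inside the 10000 ms budget.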
00:24:19.755 [2024-11-20 11:25:12.441722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.441735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.441750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.441760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.441929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.441935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.441939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.441952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.441960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.441967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.441977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.446168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.446178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.446181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.446185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.446196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.446200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.446204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5e690) 00:24:19.755 [2024-11-20 11:25:12.446210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.755 [2024-11-20 11:25:12.446222] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ac0580, cid 3, qid 0 00:24:19.755 [2024-11-20 11:25:12.446412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:19.755 [2024-11-20 11:25:12.446420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:19.755 [2024-11-20 11:25:12.446423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:19.755 [2024-11-20 11:25:12.446427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1ac0580) on tqpair=0x1a5e690 00:24:19.755 [2024-11-20 11:25:12.446435] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:19.755 00:24:19.756 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:20.021 [2024-11-20 11:25:12.494088] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:24:20.021 [2024-11-20 11:25:12.494131] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829786 ] 00:24:20.021 [2024-11-20 11:25:12.548738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:20.021 [2024-11-20 11:25:12.548800] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:20.021 [2024-11-20 11:25:12.548806] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:20.021 [2024-11-20 11:25:12.548828] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:20.021 [2024-11-20 11:25:12.548840] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:20.021 [2024-11-20 11:25:12.553485] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:20.021 [2024-11-20 11:25:12.553523] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1412690 0 00:24:20.021 [2024-11-20 11:25:12.561178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:20.021 [2024-11-20 11:25:12.561193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:20.021 [2024-11-20 11:25:12.561198] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:20.021 [2024-11-20 11:25:12.561202] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:20.021 [2024-11-20 11:25:12.561236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.021 [2024-11-20 11:25:12.561242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.021 [2024-11-20 11:25:12.561246] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.021 [2024-11-20 11:25:12.561259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:20.021 [2024-11-20 11:25:12.561282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.021 [2024-11-20 11:25:12.569178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.021 [2024-11-20 11:25:12.569189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.021 [2024-11-20 11:25:12.569192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.021 [2024-11-20 11:25:12.569197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.021 [2024-11-20 11:25:12.569210] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:24:20.021 [2024-11-20 11:25:12.569232] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:20.021 [2024-11-20 11:25:12.569238] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:20.021 [2024-11-20 11:25:12.569252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.569270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.569292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.569483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.569490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.569494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.569503] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:20.022 [2024-11-20 11:25:12.569511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:20.022 [2024-11-20 11:25:12.569518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.569533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.569544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.569773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.569781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.569785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.569794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:20.022 [2024-11-20 11:25:12.569802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:20.022 [2024-11-20 11:25:12.569809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.569817] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.569824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.569834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.570025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.570032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.570035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.570044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:20.022 [2024-11-20 11:25:12.570054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.570068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.570079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.570262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.570269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.570275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.570284] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:20.022 [2024-11-20 11:25:12.570289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:20.022 [2024-11-20 11:25:12.570297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:20.022 [2024-11-20 11:25:12.570406] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:20.022 [2024-11-20 11:25:12.570410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:20.022 [2024-11-20 11:25:12.570418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.570432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.570443] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.570644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.570650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.570653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.570662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:20.022 [2024-11-20 11:25:12.570672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.570686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.570696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.570865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.570871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.570874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.570883] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:20.022 [2024-11-20 11:25:12.570887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:20.022 [2024-11-20 11:25:12.570895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:20.022 [2024-11-20 11:25:12.570903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:20.022 [2024-11-20 11:25:12.570912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.570916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.022 [2024-11-20 11:25:12.570925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.022 [2024-11-20 11:25:12.570937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.022 [2024-11-20 11:25:12.571180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.022 [2024-11-20 11:25:12.571187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.022 [2024-11-20 11:25:12.571190] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:24:20.022 [2024-11-20 11:25:12.571194] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=4096, cccid=0 00:24:20.022 [2024-11-20 11:25:12.571199] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474100) on tqpair(0x1412690): expected_datao=0, payload_size=4096 00:24:20.022 [2024-11-20 11:25:12.571203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.571211] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.571215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.571329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.022 [2024-11-20 11:25:12.571336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.022 [2024-11-20 11:25:12.571339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.022 [2024-11-20 11:25:12.571343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.022 [2024-11-20 11:25:12.571351] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:20.022 [2024-11-20 11:25:12.571356] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:20.022 [2024-11-20 11:25:12.571361] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:20.022 [2024-11-20 11:25:12.571371] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:20.023 [2024-11-20 11:25:12.571375] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:20.023 [2024-11-20 11:25:12.571380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.571391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.571398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.571413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.023 [2024-11-20 11:25:12.571425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.023 [2024-11-20 11:25:12.571610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.023 [2024-11-20 11:25:12.571616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.023 [2024-11-20 11:25:12.571620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.023 [2024-11-20 11:25:12.571630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571634] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.571644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.023 [2024-11-20 11:25:12.571653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.571666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.023 [2024-11-20 11:25:12.571672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.571685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.023 [2024-11-20 11:25:12.571691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.571704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.023 [2024-11-20 11:25:12.571709] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.571717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.571724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.571727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.571734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.023 [2024-11-20 11:25:12.571747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474100, cid 0, qid 0 00:24:20.023 [2024-11-20 11:25:12.571752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474280, cid 1, qid 0 00:24:20.023 [2024-11-20 11:25:12.571757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474400, cid 2, qid 0 00:24:20.023 [2024-11-20 11:25:12.571761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.023 [2024-11-20 11:25:12.571766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.023 [2024-11-20 11:25:12.572003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.023 [2024-11-20 
11:25:12.572010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.023 [2024-11-20 11:25:12.572013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.023 [2024-11-20 11:25:12.572025] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:20.023 [2024-11-20 11:25:12.572030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.572039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.572045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.572051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.572067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.023 [2024-11-20 11:25:12.572078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.023 [2024-11-20 11:25:12.572251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.023 [2024-11-20 11:25:12.572259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.023 [2024-11-20 11:25:12.572263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.023 [2024-11-20 11:25:12.572334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.572343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:20.023 [2024-11-20 11:25:12.572351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.023 [2024-11-20 11:25:12.572361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.023 [2024-11-20 11:25:12.572372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.023 [2024-11-20 11:25:12.572599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.023 [2024-11-20 11:25:12.572605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.023 [2024-11-20 11:25:12.572609] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572612] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=4096, cccid=4 00:24:20.023 [2024-11-20 11:25:12.572617] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474700) on tqpair(0x1412690): expected_datao=0, payload_size=4096 00:24:20.023 [2024-11-20 11:25:12.572621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572638] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572642] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.023 [2024-11-20 11:25:12.572814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.023 [2024-11-20 11:25:12.572817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.023 [2024-11-20 11:25:12.572821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.023 [2024-11-20 11:25:12.572837] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:20.023 [2024-11-20 11:25:12.572847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.572856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.572863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.572867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.024 [2024-11-20 11:25:12.572874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.024 [2024-11-20 11:25:12.572885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.024 [2024-11-20 11:25:12.573099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.024 [2024-11-20 11:25:12.573106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.024 [2024-11-20 11:25:12.573111] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.573115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=4096, cccid=4 00:24:20.024 [2024-11-20 11:25:12.573119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474700) on tqpair(0x1412690): expected_datao=0, payload_size=4096 00:24:20.024 [2024-11-20 11:25:12.573124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.573139] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.573143] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.024 [2024-11-20 11:25:12.577182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.024 [2024-11-20 11:25:12.577186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.024 [2024-11-20 11:25:12.577205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.024 [2024-11-20 11:25:12.577233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.024 [2024-11-20 11:25:12.577245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.024 [2024-11-20 11:25:12.577422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.024 [2024-11-20 11:25:12.577428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.024 [2024-11-20 11:25:12.577432] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577435] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=4096, cccid=4 00:24:20.024 [2024-11-20 11:25:12.577440] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474700) on tqpair(0x1412690): expected_datao=0, payload_size=4096 00:24:20.024 [2024-11-20 11:25:12.577444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577460] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577464] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.024 [2024-11-20 11:25:12.577651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.024 [2024-11-20 11:25:12.577655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.024 [2024-11-20 11:25:12.577666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
00:24:20.024 [2024-11-20 11:25:12.577708] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:20.024 [2024-11-20 11:25:12.577713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:20.024 [2024-11-20 11:25:12.577718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:20.024 [2024-11-20 11:25:12.577735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.024 [2024-11-20 11:25:12.577746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.024 [2024-11-20 11:25:12.577753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1412690) 00:24:20.024 [2024-11-20 11:25:12.577767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.024 [2024-11-20 11:25:12.577781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.024 [2024-11-20 11:25:12.577787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474880, cid 5, qid 0 00:24:20.024 [2024-11-20 11:25:12.577986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.024 [2024-11-20 11:25:12.577992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.024 [2024-11-20 11:25:12.577996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.577999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.024 [2024-11-20 11:25:12.578006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.024 [2024-11-20 11:25:12.578012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.024 [2024-11-20 11:25:12.578016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.578019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474880) on tqpair=0x1412690 00:24:20.024 [2024-11-20 11:25:12.578029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.578033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1412690) 00:24:20.024 [2024-11-20 11:25:12.578039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.024 [2024-11-20 11:25:12.578049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474880, cid 5, qid 0 00:24:20.024 [2024-11-20 11:25:12.578233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.024 [2024-11-20 11:25:12.578240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.024 [2024-11-20 11:25:12.578244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.024 
[2024-11-20 11:25:12.578247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474880) on tqpair=0x1412690 00:24:20.024 [2024-11-20 11:25:12.578257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.024 [2024-11-20 11:25:12.578261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1412690) 00:24:20.024 [2024-11-20 11:25:12.578267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.024 [2024-11-20 11:25:12.578277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474880, cid 5, qid 0 00:24:20.024 [2024-11-20 11:25:12.578478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.025 [2024-11-20 11:25:12.578485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.025 [2024-11-20 11:25:12.578488] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474880) on tqpair=0x1412690 00:24:20.025 [2024-11-20 11:25:12.578502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1412690) 00:24:20.025 [2024-11-20 11:25:12.578512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.025 [2024-11-20 11:25:12.578523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474880, cid 5, qid 0 00:24:20.025 [2024-11-20 11:25:12.578602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.025 [2024-11-20 11:25:12.578609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.025 [2024-11-20 11:25:12.578612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474880) on tqpair=0x1412690 00:24:20.025 [2024-11-20 11:25:12.578633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578637] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1412690) 00:24:20.025 [2024-11-20 11:25:12.578644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.025 [2024-11-20 11:25:12.578652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1412690) 00:24:20.025 [2024-11-20 11:25:12.578662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.025 [2024-11-20 11:25:12.578669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1412690) 00:24:20.025 [2024-11-20 11:25:12.578679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:20.025 [2024-11-20 11:25:12.578687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1412690) 00:24:20.025 [2024-11-20 11:25:12.578697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.025 [2024-11-20 11:25:12.578709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474880, cid 5, qid 0 00:24:20.025 [2024-11-20 11:25:12.578714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474700, cid 4, qid 0 00:24:20.025 [2024-11-20 11:25:12.578719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474a00, cid 6, qid 0 00:24:20.025 [2024-11-20 11:25:12.578723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474b80, cid 7, qid 0 00:24:20.025 [2024-11-20 11:25:12.578886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.025 [2024-11-20 11:25:12.578892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.025 [2024-11-20 11:25:12.578895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578899] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=8192, cccid=5 00:24:20.025 [2024-11-20 11:25:12.578904] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474880) on tqpair(0x1412690): expected_datao=0, payload_size=8192 00:24:20.025 [2024-11-20 11:25:12.578908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578991] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.578995] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.025 [2024-11-20 11:25:12.579007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.025 [2024-11-20 11:25:12.579010] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579014] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=512, cccid=4 00:24:20.025 [2024-11-20 11:25:12.579018] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474700) on tqpair(0x1412690): expected_datao=0, payload_size=512 00:24:20.025 [2024-11-20 11:25:12.579022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579032] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.025 [2024-11-20 11:25:12.579044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.025 [2024-11-20 11:25:12.579047] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=512, cccid=6 00:24:20.025 [2024-11-20 11:25:12.579055] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474a00) on 
tqpair(0x1412690): expected_datao=0, payload_size=512 00:24:20.025 [2024-11-20 11:25:12.579059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579065] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579069] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.025 [2024-11-20 11:25:12.579081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.025 [2024-11-20 11:25:12.579084] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579088] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1412690): datao=0, datal=4096, cccid=7 00:24:20.025 [2024-11-20 11:25:12.579092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1474b80) on tqpair(0x1412690): expected_datao=0, payload_size=4096 00:24:20.025 [2024-11-20 11:25:12.579096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579103] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579107] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.025 [2024-11-20 11:25:12.579127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.025 [2024-11-20 11:25:12.579130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474880) on tqpair=0x1412690 00:24:20.025 [2024-11-20 11:25:12.579149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.025 [2024-11-20 11:25:12.579155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.025 [2024-11-20 11:25:12.579165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474700) on tqpair=0x1412690 00:24:20.025 [2024-11-20 11:25:12.579180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.025 [2024-11-20 11:25:12.579186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.025 [2024-11-20 11:25:12.579190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474a00) on tqpair=0x1412690 00:24:20.025 [2024-11-20 11:25:12.579201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.025 [2024-11-20 11:25:12.579209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.025 [2024-11-20 11:25:12.579212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.025 [2024-11-20 11:25:12.579216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474b80) on tqpair=0x1412690 00:24:20.025 ===================================================== 00:24:20.025 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.025 ===================================================== 00:24:20.025 Controller Capabilities/Features 00:24:20.025 ================================ 00:24:20.025 Vendor ID: 8086 00:24:20.025 Subsystem Vendor ID: 8086 
00:24:20.025 Serial Number: SPDK00000000000001 00:24:20.025 Model Number: SPDK bdev Controller 00:24:20.025 Firmware Version: 25.01 00:24:20.025 Recommended Arb Burst: 6 00:24:20.025 IEEE OUI Identifier: e4 d2 5c 00:24:20.025 Multi-path I/O 00:24:20.025 May have multiple subsystem ports: Yes 00:24:20.025 May have multiple controllers: Yes 00:24:20.026 Associated with SR-IOV VF: No 00:24:20.026 Max Data Transfer Size: 131072 00:24:20.026 Max Number of Namespaces: 32 00:24:20.026 Max Number of I/O Queues: 127 00:24:20.026 NVMe Specification Version (VS): 1.3 00:24:20.026 NVMe Specification Version (Identify): 1.3 00:24:20.026 Maximum Queue Entries: 128 00:24:20.026 Contiguous Queues Required: Yes 00:24:20.026 Arbitration Mechanisms Supported 00:24:20.026 Weighted Round Robin: Not Supported 00:24:20.026 Vendor Specific: Not Supported 00:24:20.026 Reset Timeout: 15000 ms 00:24:20.026 Doorbell Stride: 4 bytes 00:24:20.026 NVM Subsystem Reset: Not Supported 00:24:20.026 Command Sets Supported 00:24:20.026 NVM Command Set: Supported 00:24:20.026 Boot Partition: Not Supported 00:24:20.026 Memory Page Size Minimum: 4096 bytes 00:24:20.026 Memory Page Size Maximum: 4096 bytes 00:24:20.026 Persistent Memory Region: Not Supported 00:24:20.026 Optional Asynchronous Events Supported 00:24:20.026 Namespace Attribute Notices: Supported 00:24:20.026 Firmware Activation Notices: Not Supported 00:24:20.026 ANA Change Notices: Not Supported 00:24:20.026 PLE Aggregate Log Change Notices: Not Supported 00:24:20.026 LBA Status Info Alert Notices: Not Supported 00:24:20.026 EGE Aggregate Log Change Notices: Not Supported 00:24:20.026 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.026 Zone Descriptor Change Notices: Not Supported 00:24:20.026 Discovery Log Change Notices: Not Supported 00:24:20.026 Controller Attributes 00:24:20.026 128-bit Host Identifier: Supported 00:24:20.026 Non-Operational Permissive Mode: Not Supported 00:24:20.026 NVM Sets: Not Supported 00:24:20.026 Read Recovery Levels: Not Supported 00:24:20.026 Endurance Groups: Not Supported 00:24:20.026 Predictable Latency Mode: Not Supported 00:24:20.026 Traffic Based Keep ALive: Not Supported 00:24:20.026 Namespace Granularity: Not Supported 00:24:20.026 SQ Associations: Not Supported 00:24:20.026 UUID List: Not Supported 00:24:20.026 Multi-Domain Subsystem: Not Supported 00:24:20.026 Fixed Capacity Management: Not Supported 00:24:20.026 Variable Capacity Management: Not Supported 00:24:20.026 Delete Endurance Group: Not Supported 00:24:20.026 Delete NVM Set: Not Supported 00:24:20.026 Extended LBA Formats Supported: Not Supported 00:24:20.026 Flexible Data Placement Supported: Not Supported 00:24:20.026 00:24:20.026 Controller Memory Buffer Support 00:24:20.026 ================================ 00:24:20.026 Supported: No 00:24:20.026 00:24:20.026 Persistent Memory Region Support 00:24:20.026 ================================ 00:24:20.026 Supported: No 00:24:20.026 00:24:20.026 Admin Command Set Attributes 00:24:20.026 ============================ 00:24:20.026 Security Send/Receive: Not Supported 00:24:20.026 Format NVM: Not Supported 00:24:20.026 Firmware Activate/Download: Not Supported 00:24:20.026 Namespace Management: Not Supported 00:24:20.026 Device Self-Test: Not Supported 00:24:20.026 Directives: Not Supported 00:24:20.026 NVMe-MI: Not Supported 00:24:20.026 Virtualization Management: Not Supported 00:24:20.026 Doorbell Buffer Config: Not Supported 00:24:20.026 Get LBA Status Capability: Not Supported 00:24:20.026 Command & 
Feature Lockdown Capability: Not Supported 00:24:20.026 Abort Command Limit: 4 00:24:20.026 Async Event Request Limit: 4 00:24:20.026 Number of Firmware Slots: N/A 00:24:20.026 Firmware Slot 1 Read-Only: N/A 00:24:20.026 Firmware Activation Without Reset: N/A 00:24:20.026 Multiple Update Detection Support: N/A 00:24:20.026 Firmware Update Granularity: No Information Provided 00:24:20.026 Per-Namespace SMART Log: No 00:24:20.026 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.026 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:20.026 Command Effects Log Page: Supported 00:24:20.026 Get Log Page Extended Data: Supported 00:24:20.026 Telemetry Log Pages: Not Supported 00:24:20.026 Persistent Event Log Pages: Not Supported 00:24:20.026 Supported Log Pages Log Page: May Support 00:24:20.026 Commands Supported & Effects Log Page: Not Supported 00:24:20.026 Feature Identifiers & Effects Log Page:May Support 00:24:20.026 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.026 Data Area 4 for Telemetry Log: Not Supported 00:24:20.026 Error Log Page Entries Supported: 128 00:24:20.026 Keep Alive: Supported 00:24:20.026 Keep Alive Granularity: 10000 ms 00:24:20.026 00:24:20.026 NVM Command Set Attributes 00:24:20.026 ========================== 00:24:20.026 Submission Queue Entry Size 00:24:20.026 Max: 64 00:24:20.026 Min: 64 00:24:20.026 Completion Queue Entry Size 00:24:20.026 Max: 16 00:24:20.026 Min: 16 00:24:20.026 Number of Namespaces: 32 00:24:20.026 Compare Command: Supported 00:24:20.026 Write Uncorrectable Command: Not Supported 00:24:20.026 Dataset Management Command: Supported 00:24:20.026 Write Zeroes Command: Supported 00:24:20.026 Set Features Save Field: Not Supported 00:24:20.026 Reservations: Supported 00:24:20.026 Timestamp: Not Supported 00:24:20.026 Copy: Supported 00:24:20.026 Volatile Write Cache: Present 00:24:20.026 Atomic Write Unit (Normal): 1 00:24:20.026 Atomic Write Unit (PFail): 1 00:24:20.026 Atomic Compare & Write Unit: 1 00:24:20.026 Fused Compare & Write: Supported 00:24:20.026 Scatter-Gather List 00:24:20.026 SGL Command Set: Supported 00:24:20.026 SGL Keyed: Supported 00:24:20.026 SGL Bit Bucket Descriptor: Not Supported 00:24:20.026 SGL Metadata Pointer: Not Supported 00:24:20.026 Oversized SGL: Not Supported 00:24:20.026 SGL Metadata Address: Not Supported 00:24:20.026 SGL Offset: Supported 00:24:20.026 Transport SGL Data Block: Not Supported 00:24:20.026 Replay Protected Memory Block: Not Supported 00:24:20.026 00:24:20.026 Firmware Slot Information 00:24:20.026 ========================= 00:24:20.026 Active slot: 1 00:24:20.026 Slot 1 Firmware Revision: 25.01 00:24:20.026 00:24:20.026 00:24:20.026 Commands Supported and Effects 00:24:20.026 ============================== 00:24:20.026 Admin Commands 00:24:20.026 -------------- 00:24:20.026 Get Log Page (02h): Supported 00:24:20.026 Identify (06h): Supported 00:24:20.026 Abort (08h): Supported 00:24:20.026 Set Features (09h): Supported 00:24:20.026 Get Features (0Ah): Supported 00:24:20.027 Asynchronous Event Request (0Ch): Supported 00:24:20.027 Keep Alive (18h): Supported 00:24:20.027 I/O Commands 00:24:20.027 ------------ 00:24:20.027 Flush (00h): Supported LBA-Change 00:24:20.027 Write (01h): Supported LBA-Change 00:24:20.027 Read (02h): Supported 00:24:20.027 Compare (05h): Supported 00:24:20.027 Write Zeroes (08h): Supported LBA-Change 00:24:20.027 Dataset Management (09h): Supported LBA-Change 00:24:20.027 Copy (19h): Supported LBA-Change 00:24:20.027 00:24:20.027 Error Log 00:24:20.027 
========= 00:24:20.027 00:24:20.027 Arbitration 00:24:20.027 =========== 00:24:20.027 Arbitration Burst: 1 00:24:20.027 00:24:20.027 Power Management 00:24:20.027 ================ 00:24:20.027 Number of Power States: 1 00:24:20.027 Current Power State: Power State #0 00:24:20.027 Power State #0: 00:24:20.027 Max Power: 0.00 W 00:24:20.027 Non-Operational State: Operational 00:24:20.027 Entry Latency: Not Reported 00:24:20.027 Exit Latency: Not Reported 00:24:20.027 Relative Read Throughput: 0 00:24:20.027 Relative Read Latency: 0 00:24:20.027 Relative Write Throughput: 0 00:24:20.027 Relative Write Latency: 0 00:24:20.027 Idle Power: Not Reported 00:24:20.027 Active Power: Not Reported 00:24:20.027 Non-Operational Permissive Mode: Not Supported 00:24:20.027 00:24:20.027 Health Information 00:24:20.027 ================== 00:24:20.027 Critical Warnings: 00:24:20.027 Available Spare Space: OK 00:24:20.027 Temperature: OK 00:24:20.027 Device Reliability: OK 00:24:20.027 Read Only: No 00:24:20.027 Volatile Memory Backup: OK 00:24:20.027 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:20.027 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:20.027 Available Spare: 0% 00:24:20.027 Available Spare Threshold: 0% 00:24:20.027 Life Percentage Used:[2024-11-20 11:25:12.579320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1412690) 00:24:20.027 [2024-11-20 11:25:12.579332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-11-20 11:25:12.579345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474b80, cid 7, qid 0 00:24:20.027 [2024-11-20 11:25:12.579529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.027 [2024-11-20 11:25:12.579535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.027 [2024-11-20 11:25:12.579539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474b80) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.579577] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:20.027 [2024-11-20 11:25:12.579587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474100) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.579593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-11-20 11:25:12.579599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474280) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.579604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-11-20 11:25:12.579609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474400) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.579613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-11-20 11:25:12.579618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.579623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.027 [2024-11-20 11:25:12.579631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.027 [2024-11-20 11:25:12.579646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-11-20 11:25:12.579659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.027 [2024-11-20 11:25:12.579848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.027 [2024-11-20 11:25:12.579854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.027 [2024-11-20 11:25:12.579858] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.579868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.579876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.027 [2024-11-20 11:25:12.579883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.027 [2024-11-20 11:25:12.579897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.027 [2024-11-20 11:25:12.580069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.027 [2024-11-20 11:25:12.580076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.027 [2024-11-20 11:25:12.580079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.580083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.027 [2024-11-20 11:25:12.580088] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:20.027 [2024-11-20 11:25:12.580093] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:20.027 [2024-11-20 11:25:12.580102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.027 [2024-11-20 11:25:12.580106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.028 [2024-11-20 11:25:12.580116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-11-20 11:25:12.580127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.028 [2024-11-20 11:25:12.580297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.028 [2024-11-20 11:25:12.580304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.028 [2024-11-20 
11:25:12.580308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.028 [2024-11-20 11:25:12.580321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.028 [2024-11-20 11:25:12.580336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-11-20 11:25:12.580346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.028 [2024-11-20 11:25:12.580564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.028 [2024-11-20 11:25:12.580570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.028 [2024-11-20 11:25:12.580574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.028 [2024-11-20 11:25:12.580587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.028 [2024-11-20 11:25:12.580601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-11-20 11:25:12.580611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.028 [2024-11-20 11:25:12.580857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.028 [2024-11-20 11:25:12.580863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.028 [2024-11-20 11:25:12.580867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.028 [2024-11-20 11:25:12.580880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.580888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.028 [2024-11-20 11:25:12.580895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-11-20 11:25:12.580910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.028 [2024-11-20 11:25:12.581127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.028 [2024-11-20 11:25:12.581133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.028 [2024-11-20 11:25:12.581136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.581140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on 
tqpair=0x1412690 00:24:20.028 [2024-11-20 11:25:12.581150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.581154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.585165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1412690) 00:24:20.028 [2024-11-20 11:25:12.585175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.028 [2024-11-20 11:25:12.585187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1474580, cid 3, qid 0 00:24:20.028 [2024-11-20 11:25:12.585353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.028 [2024-11-20 11:25:12.585359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.028 [2024-11-20 11:25:12.585363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.028 [2024-11-20 11:25:12.585367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1474580) on tqpair=0x1412690 00:24:20.028 [2024-11-20 11:25:12.585374] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:20.028 0% 00:24:20.028 Data Units Read: 0 00:24:20.028 Data Units Written: 0 00:24:20.028 Host Read Commands: 0 00:24:20.028 Host Write Commands: 0 00:24:20.028 Controller Busy Time: 0 minutes 00:24:20.028 Power Cycles: 0 00:24:20.028 Power On Hours: 0 hours 00:24:20.028 Unsafe Shutdowns: 0 00:24:20.028 Unrecoverable Media Errors: 0 00:24:20.028 Lifetime Error Log Entries: 0 00:24:20.028 Warning Temperature Time: 0 minutes 00:24:20.028 Critical Temperature Time: 0 minutes 00:24:20.028 00:24:20.028 Number of Queues 00:24:20.028 ================ 00:24:20.028 Number of I/O Submission Queues: 127 00:24:20.028 Number of I/O Completion Queues: 127 00:24:20.028 00:24:20.028 Active Namespaces 00:24:20.028 ================= 00:24:20.028 Namespace ID:1 00:24:20.028 Error Recovery Timeout: Unlimited 00:24:20.028 Command Set Identifier: NVM (00h) 00:24:20.028 Deallocate: Supported 00:24:20.028 Deallocated/Unwritten Error: Not Supported 00:24:20.028 Deallocated Read Value: Unknown 00:24:20.028 Deallocate in Write Zeroes: Not Supported 00:24:20.028 Deallocated Guard Field: 0xFFFF 00:24:20.028 Flush: Supported 00:24:20.028 Reservation: Supported 00:24:20.028 Namespace Sharing Capabilities: Multiple Controllers 00:24:20.028 Size (in LBAs): 131072 (0GiB) 00:24:20.028 Capacity (in LBAs): 131072 (0GiB) 00:24:20.028 Utilization (in LBAs): 131072 (0GiB) 00:24:20.028 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:20.028 EUI64: ABCDEF0123456789 00:24:20.028 UUID: 8cc1e108-5f69-40f6-b68e-0d7585b19682 00:24:20.028 Thin Provisioning: Not Supported 00:24:20.028 Per-NS Atomic Units: Yes 00:24:20.028 Atomic Boundary Size (Normal): 0 00:24:20.028 Atomic Boundary Size (PFail): 0 00:24:20.028 Atomic Boundary Offset: 0 00:24:20.028 Maximum Single Source Range Length: 65535 00:24:20.028 Maximum Copy Length: 65535 00:24:20.028 Maximum Source Range Count: 1 00:24:20.028 NGUID/EUI64 Never Reused: No 00:24:20.028 Namespace Write Protected: No 00:24:20.028 Number of LBA Formats: 1 00:24:20.028 Current LBA Format: LBA Format #00 00:24:20.028 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:20.028 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.028 rmmod nvme_tcp 00:24:20.028 rmmod nvme_fabrics 00:24:20.028 rmmod nvme_keyring 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2829577 ']' 00:24:20.028 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2829577 00:24:20.029 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2829577 ']' 00:24:20.029 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2829577 00:24:20.029 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:20.029 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.029 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829577 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829577' 00:24:20.290 killing process with pid 2829577 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2829577 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2829577 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.290 11:25:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:20.290 11:25:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:22.839
00:24:22.839 real 0m11.681s
00:24:22.839 user 0m8.589s
00:24:22.839 sys 0m6.201s
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:22.839 ************************************
00:24:22.839 END TEST nvmf_identify
00:24:22.839 ************************************
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.839 ************************************
00:24:22.839 START TEST nvmf_perf
00:24:22.839 ************************************
00:24:22.839 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:22.839 * Looking for test storage...
00:24:22.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:22.839 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:22.839 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:22.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.840 --rc genhtml_branch_coverage=1 00:24:22.840 --rc genhtml_function_coverage=1 00:24:22.840 --rc genhtml_legend=1 00:24:22.840 --rc geninfo_all_blocks=1 00:24:22.840 --rc geninfo_unexecuted_blocks=1 00:24:22.840 00:24:22.840 ' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:22.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.840 --rc genhtml_branch_coverage=1 00:24:22.840 --rc genhtml_function_coverage=1 00:24:22.840 --rc genhtml_legend=1 00:24:22.840 --rc geninfo_all_blocks=1 00:24:22.840 --rc geninfo_unexecuted_blocks=1 00:24:22.840 00:24:22.840 ' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:22.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.840 --rc genhtml_branch_coverage=1 00:24:22.840 --rc genhtml_function_coverage=1 00:24:22.840 --rc genhtml_legend=1 00:24:22.840 --rc geninfo_all_blocks=1 00:24:22.840 --rc geninfo_unexecuted_blocks=1 00:24:22.840 00:24:22.840 ' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:22.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:22.840 --rc genhtml_branch_coverage=1 00:24:22.840 --rc genhtml_function_coverage=1 00:24:22.840 --rc genhtml_legend=1 00:24:22.840 --rc geninfo_all_blocks=1 00:24:22.840 --rc geninfo_unexecuted_blocks=1 00:24:22.840 00:24:22.840 ' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:22.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:22.840 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.841 11:25:15 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:22.841 11:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:30.986 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:30.986 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:30.986 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:30.986 11:25:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:30.986 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.986 11:25:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:30.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:30.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms
00:24:30.986
00:24:30.986 --- 10.0.0.2 ping statistics ---
00:24:30.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:30.986 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:30.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:30.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:24:30.986
00:24:30.986 --- 10.0.0.1 ping statistics ---
00:24:30.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:30.986 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:30.986 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2833940
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2833940
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2833940 ']'
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:24:30.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.987 11:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.987 [2024-11-20 11:25:22.940355] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:24:30.987 [2024-11-20 11:25:22.940423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.987 [2024-11-20 11:25:23.041361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.987 [2024-11-20 11:25:23.094662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.987 [2024-11-20 11:25:23.094714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.987 [2024-11-20 11:25:23.094722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.987 [2024-11-20 11:25:23.094730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.987 [2024-11-20 11:25:23.094737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.987 [2024-11-20 11:25:23.096811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.987 [2024-11-20 11:25:23.096977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.987 [2024-11-20 11:25:23.097139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.987 [2024-11-20 11:25:23.097139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:31.248 11:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:31.821 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:31.821 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:31.821 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:31.821 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:32.082 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
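The xtrace above condenses to a short rpc.py recipe for standing up the target's storage. A hedged sketch of the same bring-up (the rpc.py path, the jq filter, and the 64/512 malloc sizes are taken verbatim from this run; piping gen_nvme.sh into load_subsystem_config is an assumption consistent with the two host/perf.sh@28 trace lines, not a quote of perf.sh itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attach the machine's local NVMe controllers as SPDK bdevs (Nvme0n1, ...);
    # gen_nvme.sh emits the JSON config that load_subsystem_config consumes.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | "$rpc" load_subsystem_config

    # Recover Nvme0's PCIe address for the later local-vs-fabric comparison runs.
    local_nvme_trid=$("$rpc" framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')

    # A 64 MiB RAM-backed bdev with 512-byte blocks; the RPC prints the new name (Malloc0).
    "$rpc" bdev_malloc_create 64 512

Both bdevs are then exported as namespaces of the subsystem created in the next few trace lines.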
00:24:32.082 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:32.082 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:32.082 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:32.082 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:32.344 [2024-11-20 11:25:24.931089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.344 11:25:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.605 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:32.605 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.866 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:32.866 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:32.867 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.128 [2024-11-20 11:25:25.730849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.128 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:33.389 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:33.389 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:33.389 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:33.389 11:25:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:34.774 Initializing NVMe Controllers 00:24:34.774 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:34.774 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:34.774 Initialization complete. Launching workers. 
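This first spdk_nvme_perf invocation exercises the local PCIe SSD (0000:65:00.0) so the fabric numbers that follow have a baseline. As a reader's gloss on the flags used throughout this suite (annotations are mine, not an authoritative option list; the command itself is the one traced above):

    # -i 0       shared-memory group ID, as passed by the harness
    # -q 32      queue depth per namespace
    # -o 4096    I/O size in bytes (4 KiB)
    # -w randrw  random mixed workload; -M 50 = 50% reads / 50% writes
    # -t 1       run time in seconds
    # -r '...'   transport ID: PCIe here, 'trtype:tcp adrfam:IPv4 ...' in the fabric runs
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:65:00.0'

In the Latency(us) table printed after each run, the columns are per-device IOPS, throughput in MiB/s, and average/min/max latency in microseconds.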
00:24:34.774 ========================================================
00:24:34.774 Latency(us)
00:24:34.774 Device Information : IOPS MiB/s Average min max
00:24:34.774 PCIE (0000:65:00.0) NSID 1 from core 0: 78838.31 307.96 405.04 13.24 7519.53
00:24:34.774 ========================================================
00:24:34.774 Total : 78838.31 307.96 405.04 13.24 7519.53
00:24:34.774
00:24:34.774 11:25:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:36.217 Initializing NVMe Controllers
00:24:36.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:36.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:36.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:36.218 Initialization complete. Launching workers.
00:24:36.218 ========================================================
00:24:36.218 Latency(us)
00:24:36.218 Device Information : IOPS MiB/s Average min max
00:24:36.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.85 0.36 10955.90 234.27 44911.13
00:24:36.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.90 0.24 16665.52 6986.44 54870.07
00:24:36.218 ========================================================
00:24:36.218 Total : 153.75 0.60 13217.50 234.27 54870.07
00:24:36.218
00:24:36.218 11:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:37.230 Initializing NVMe Controllers
00:24:37.230 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:37.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:37.230 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:37.230 Initialization complete. Launching workers.
00:24:37.230 ========================================================
00:24:37.230 Latency(us)
00:24:37.230 Device Information : IOPS MiB/s Average min max
00:24:37.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12503.99 48.84 2562.07 357.66 7955.51
00:24:37.230 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3871.00 15.12 8307.74 5370.52 17149.37
00:24:37.230 ========================================================
00:24:37.230 Total : 16374.98 63.96 3920.33 357.66 17149.37
00:24:37.230
00:24:37.230 11:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:37.230 11:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:37.230 11:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:39.775 Initializing NVMe Controllers
00:24:39.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:39.775 Controller IO queue size 128, less than required.
00:24:39.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:39.775 Controller IO queue size 128, less than required.
00:24:39.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:39.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:39.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:39.775 Initialization complete. Launching workers.
00:24:39.775 ========================================================
00:24:39.775 Latency(us)
00:24:39.775 Device Information : IOPS MiB/s Average min max
00:24:39.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1859.55 464.89 69533.71 40542.88 119329.12
00:24:39.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.56 147.64 222355.26 81466.43 332717.43
00:24:39.775 ========================================================
00:24:39.775 Total : 2450.11 612.53 106369.06 40542.88 332717.43
00:24:39.775
00:24:39.775 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:40.035 No valid NVMe controllers or AIO or URING devices found
00:24:40.035 Initializing NVMe Controllers
00:24:40.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:40.035 Controller IO queue size 128, less than required.
00:24:40.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.035 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:40.035 Controller IO queue size 128, less than required.
00:24:40.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:40.035 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:40.035 WARNING: Some requested NVMe devices were skipped
00:24:40.035 11:25:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:42.580 Initializing NVMe Controllers
00:24:42.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:42.580 Controller IO queue size 128, less than required.
00:24:42.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:42.580 Controller IO queue size 128, less than required.
00:24:42.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:42.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:42.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:42.580 Initialization complete. Launching workers.
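The --transport-stat flag on this last run makes the initiator dump per-namespace TCP poll counters ahead of the usual latency table. One way to read the numbers that follow: polls minus idle_polls is the count of polls that actually found socket activity, and in this dump it equals sock_completions for both namespaces (35972 - 22280 = 13692 and 33959 - 20310 = 13649). A quick sketch of that arithmetic, with the NSID 1 values copied from the dump below:

    # polls=35972 and idle_polls=22280 are the NSID 1 counters printed in this run
    awk 'BEGIN { polls = 35972; idle = 22280;
                 printf "busy polls: %d (%.1f%% of all polls)\n",
                        polls - idle, 100 * (polls - idle) / polls }'
    # prints: busy polls: 13692 (38.1% of all polls)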
00:24:42.580
00:24:42.580 ====================
00:24:42.580 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:42.580 TCP transport:
00:24:42.580 polls: 35972
00:24:42.580 idle_polls: 22280
00:24:42.580 sock_completions: 13692
00:24:42.580 nvme_completions: 7459
00:24:42.580 submitted_requests: 11184
00:24:42.580 queued_requests: 1
00:24:42.580
00:24:42.580 ====================
00:24:42.580 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:42.580 TCP transport:
00:24:42.580 polls: 33959
00:24:42.580 idle_polls: 20310
00:24:42.580 sock_completions: 13649
00:24:42.580 nvme_completions: 7177
00:24:42.580 submitted_requests: 10866
00:24:42.580 queued_requests: 1
00:24:42.580 ========================================================
00:24:42.581 Latency(us)
00:24:42.581 Device Information : IOPS MiB/s Average min max
00:24:42.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1864.47 466.12 70507.62 44540.68 129710.20
00:24:42.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1793.97 448.49 72038.77 32305.99 115594.53
00:24:42.581 ========================================================
00:24:42.581 Total : 3658.43 914.61 71258.44 32305.99 129710.20
00:24:42.581
00:24:42.581 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:42.841 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:42.842 rmmod nvme_tcp
00:24:42.842 rmmod nvme_fabrics
00:24:42.842 rmmod nvme_keyring
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2833940 ']'
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2833940
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2833940 ']'
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2833940
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2833940
00:24:42.842 11:25:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2833940'
00:24:42.842 killing process with pid 2833940
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2833940
00:24:42.842 11:25:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2833940
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:44.750 11:25:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:47.296
00:24:47.296 real 0m24.413s
00:24:47.296 user 0m59.012s
00:24:47.296 sys 0m8.661s
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:47.296 ************************************
00:24:47.296 END TEST nvmf_perf
00:24:47.296 ************************************
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:47.296 ************************************
00:24:47.296 START TEST nvmf_fio_host
00:24:47.296 ************************************
00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:24:47.296 * Looking for test storage...
00:24:47.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.296 --rc genhtml_branch_coverage=1 00:24:47.296 --rc genhtml_function_coverage=1 00:24:47.296 --rc genhtml_legend=1 00:24:47.296 --rc geninfo_all_blocks=1 00:24:47.296 --rc geninfo_unexecuted_blocks=1 00:24:47.296 00:24:47.296 ' 00:24:47.296 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:47.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.297 --rc genhtml_branch_coverage=1 00:24:47.297 --rc genhtml_function_coverage=1 00:24:47.297 --rc genhtml_legend=1 00:24:47.297 --rc geninfo_all_blocks=1 00:24:47.297 --rc geninfo_unexecuted_blocks=1 00:24:47.297 00:24:47.297 ' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:47.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.297 --rc genhtml_branch_coverage=1 00:24:47.297 --rc genhtml_function_coverage=1 00:24:47.297 --rc genhtml_legend=1 00:24:47.297 --rc geninfo_all_blocks=1 00:24:47.297 --rc geninfo_unexecuted_blocks=1 00:24:47.297 00:24:47.297 ' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:47.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:47.297 --rc genhtml_branch_coverage=1 00:24:47.297 --rc genhtml_function_coverage=1 00:24:47.297 --rc genhtml_legend=1 00:24:47.297 --rc geninfo_all_blocks=1 00:24:47.297 --rc geninfo_unexecuted_blocks=1 00:24:47.297 00:24:47.297 ' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.297 11:25:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:47.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:47.297 
11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:47.297 11:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:55.437 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:55.437 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.437 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:55.438 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:55.438 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.438 11:25:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:24:55.438 00:24:55.438 --- 10.0.0.2 ping statistics --- 00:24:55.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.438 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:24:55.438 00:24:55.438 --- 10.0.0.1 ping statistics --- 00:24:55.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.438 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2841004 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2841004 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2841004 ']' 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.438 11:25:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.438 [2024-11-20 11:25:47.398333] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
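[editor's sketch] A minimal condensation of the nvmftestinit plumbing traced above: nvmf/common.sh moves the target-side e810 port (cvl_0_0) into a private network namespace, addresses both ends on 10.0.0.0/24, opens the first listener port in the firewall, and verifies reachability in both directions before starting the target. Every command below is taken from the trace; only the ordering comments are added.

  # Target port lives in its own namespace; the initiator port (cvl_0_1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The ACCEPT rule carries an SPDK_NVMF comment tag so teardown can strip it later
  # with: iptables-save | grep -v SPDK_NVMF | iptables-restore
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns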
00:24:55.438 [2024-11-20 11:25:47.398413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.438 [2024-11-20 11:25:47.501861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.438 [2024-11-20 11:25:47.555728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.438 [2024-11-20 11:25:47.555780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.438 [2024-11-20 11:25:47.555789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.438 [2024-11-20 11:25:47.555796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.438 [2024-11-20 11:25:47.555802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.438 [2024-11-20 11:25:47.557982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.438 [2024-11-20 11:25:47.558143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.438 [2024-11-20 11:25:47.558308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.438 [2024-11-20 11:25:47.558309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.700 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.700 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:55.700 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:55.700 [2024-11-20 11:25:48.395266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.700 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:55.700 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.700 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.961 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:55.961 Malloc1 00:24:56.222 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.222 11:25:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:56.482 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.742 [2024-11-20 11:25:49.266022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.742 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:57.002 11:25:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:57.262 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:57.262 fio-3.35 00:24:57.262 Starting 1 thread 00:24:59.805 00:24:59.805 test: (groupid=0, jobs=1): 
err= 0: pid=2841791: Wed Nov 20 11:25:52 2024 00:24:59.805 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(108MiB/2004msec) 00:24:59.805 slat (usec): min=2, max=287, avg= 2.16, stdev= 2.46 00:24:59.805 clat (usec): min=3875, max=9132, avg=5107.68, stdev=388.93 00:24:59.805 lat (usec): min=3877, max=9134, avg=5109.84, stdev=389.13 00:24:59.805 clat percentiles (usec): 00:24:59.805 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:59.805 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:59.805 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:59.805 | 99.00th=[ 5997], 99.50th=[ 6980], 99.90th=[ 8455], 99.95th=[ 8455], 00:24:59.805 | 99.99th=[ 9110] 00:24:59.805 bw ( KiB/s): min=53496, max=55800, per=99.93%, avg=55164.00, stdev=1117.19, samples=4 00:24:59.805 iops : min=13374, max=13950, avg=13791.00, stdev=279.30, samples=4 00:24:59.805 write: IOPS=13.8k, BW=53.8MiB/s (56.5MB/s)(108MiB/2004msec); 0 zone resets 00:24:59.805 slat (usec): min=2, max=269, avg= 2.23, stdev= 1.80 00:24:59.805 clat (usec): min=2876, max=8135, avg=4127.08, stdev=340.75 00:24:59.805 lat (usec): min=2894, max=8137, avg=4129.31, stdev=341.04 00:24:59.805 clat percentiles (usec): 00:24:59.805 | 1.00th=[ 3458], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:59.805 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:59.805 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:24:59.805 | 99.00th=[ 5276], 99.50th=[ 5866], 99.90th=[ 6980], 99.95th=[ 7111], 00:24:59.805 | 99.99th=[ 7832] 00:24:59.805 bw ( KiB/s): min=54032, max=55592, per=100.00%, avg=55140.00, stdev=741.16, samples=4 00:24:59.805 iops : min=13508, max=13898, avg=13785.00, stdev=185.29, samples=4 00:24:59.805 lat (msec) : 4=16.78%, 10=83.22% 00:24:59.805 cpu : usr=79.28%, sys=20.17%, ctx=23, majf=0, minf=17 00:24:59.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:59.805 issued rwts: total=27656,27623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:59.805 00:24:59.805 Run status group 0 (all jobs): 00:24:59.805 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:59.805 WRITE: bw=53.8MiB/s (56.5MB/s), 53.8MiB/s-53.8MiB/s (56.5MB/s-56.5MB/s), io=108MiB (113MB), run=2004-2004msec 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:59.805 
11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:59.805 11:25:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:00.066 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:00.066 fio-3.35 00:25:00.066 Starting 1 thread 00:25:02.623 00:25:02.623 test: (groupid=0, jobs=1): err= 0: pid=2842371: Wed Nov 20 11:25:55 2024 00:25:02.623 read: IOPS=9511, BW=149MiB/s (156MB/s)(298MiB/2004msec) 00:25:02.623 slat (usec): min=3, max=114, avg= 3.59, stdev= 1.59 00:25:02.623 clat (usec): min=2460, max=16981, avg=8213.34, stdev=1917.71 00:25:02.623 lat (usec): min=2463, max=16984, avg=8216.93, stdev=1917.81 00:25:02.623 clat percentiles (usec): 00:25:02.623 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6456], 00:25:02.623 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:25:02.623 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11338], 00:25:02.623 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13435], 99.95th=[13435], 00:25:02.623 | 99.99th=[13960] 00:25:02.623 bw ( KiB/s): min=71360, max=81312, per=49.06%, avg=74672.00, stdev=4506.17, samples=4 00:25:02.623 iops : min= 4460, max= 5082, avg=4667.00, stdev=281.64, samples=4 00:25:02.623 write: IOPS=5455, BW=85.2MiB/s (89.4MB/s)(153MiB/1790msec); 0 zone resets 00:25:02.623 slat (usec): min=39, max=327, 
avg=40.83, stdev= 6.91 00:25:02.624 clat (usec): min=1253, max=14882, avg=9111.11, stdev=1347.25 00:25:02.624 lat (usec): min=1293, max=14922, avg=9151.95, stdev=1348.63 00:25:02.624 clat percentiles (usec): 00:25:02.624 | 1.00th=[ 5997], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 8029], 00:25:02.624 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:25:02.624 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:25:02.624 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14091], 99.95th=[14615], 00:25:02.624 | 99.99th=[14877] 00:25:02.624 bw ( KiB/s): min=75040, max=84288, per=89.16%, avg=77824.00, stdev=4346.58, samples=4 00:25:02.624 iops : min= 4690, max= 5268, avg=4864.00, stdev=271.66, samples=4 00:25:02.624 lat (msec) : 2=0.01%, 4=0.59%, 10=78.21%, 20=21.19% 00:25:02.624 cpu : usr=84.42%, sys=14.08%, ctx=15, majf=0, minf=25 00:25:02.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:02.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:02.624 issued rwts: total=19062,9765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:02.624 00:25:02.624 Run status group 0 (all jobs): 00:25:02.624 READ: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=298MiB (312MB), run=2004-2004msec 00:25:02.624 WRITE: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=153MiB (160MB), run=1790-1790msec 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:02.624 rmmod nvme_tcp 00:25:02.624 rmmod nvme_fabrics 00:25:02.624 rmmod nvme_keyring 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2841004 ']' 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2841004 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2841004 ']' 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 
2841004 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.624 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841004 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841004' 00:25:02.884 killing process with pid 2841004 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2841004 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2841004 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:02.884 11:25:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.428 00:25:05.428 real 0m17.967s 00:25:05.428 user 1m6.630s 00:25:05.428 sys 0m7.560s 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.428 ************************************ 00:25:05.428 END TEST nvmf_fio_host 00:25:05.428 ************************************ 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.428 ************************************ 00:25:05.428 START TEST nvmf_failover 00:25:05.428 ************************************ 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:05.428 * Looking for test storage... 00:25:05.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:05.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.428 --rc genhtml_branch_coverage=1 00:25:05.428 --rc genhtml_function_coverage=1 00:25:05.428 --rc genhtml_legend=1 00:25:05.428 --rc geninfo_all_blocks=1 00:25:05.428 --rc geninfo_unexecuted_blocks=1 00:25:05.428 00:25:05.428 ' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:05.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.428 --rc genhtml_branch_coverage=1 00:25:05.428 --rc genhtml_function_coverage=1 00:25:05.428 --rc genhtml_legend=1 00:25:05.428 --rc geninfo_all_blocks=1 00:25:05.428 --rc geninfo_unexecuted_blocks=1 00:25:05.428 00:25:05.428 ' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:05.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.428 --rc genhtml_branch_coverage=1 00:25:05.428 --rc genhtml_function_coverage=1 00:25:05.428 --rc genhtml_legend=1 00:25:05.428 --rc geninfo_all_blocks=1 00:25:05.428 --rc geninfo_unexecuted_blocks=1 00:25:05.428 00:25:05.428 ' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:05.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.428 --rc genhtml_branch_coverage=1 00:25:05.428 --rc genhtml_function_coverage=1 00:25:05.428 --rc genhtml_legend=1 00:25:05.428 --rc geninfo_all_blocks=1 00:25:05.428 --rc geninfo_unexecuted_blocks=1 00:25:05.428 00:25:05.428 ' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.428 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:05.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
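[editor's sketch] The failover test drives the target through the rpc.py path just assigned to $rpc_py, the same control channel the nvmf_fio_host test used. For reference, the subsystem bring-up condensed from that earlier trace; rpc.py reaches the target on its default UNIX socket /var/tmp/spdk.sock unless -s overrides it, and the bdev/NQN names are the ones this log happens to use, not fixed requirements.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, flags as traced above
  $RPC bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1  # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420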
00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.429 11:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:13.570 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:13.570 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:13.570 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:13.570 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.570 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:25:13.571 00:25:13.571 --- 10.0.0.2 ping statistics --- 00:25:13.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.571 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:25:13.571 00:25:13.571 --- 10.0.0.1 ping statistics --- 00:25:13.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.571 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2847029 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2847029 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2847029 ']' 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.571 11:26:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.571 [2024-11-20 11:26:05.489430] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:25:13.571 [2024-11-20 11:26:05.489492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.571 [2024-11-20 11:26:05.588254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:13.571 [2024-11-20 11:26:05.639808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
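[editor's note] nvmfappstart launches nvmf_tgt inside the target namespace with -m 0xE, and the notices around this point line up with that mask: 0xE is binary 1110, so core 0 is left free and "Total cores available: 3" is followed by reactors starting on cores 1, 2 and 3. A quick sanity check of the decode:

    # decode the reactor core mask used above: 0xE = 0b1110 -> cores 1, 2, 3
    mask=0xE
    for i in {0..7}; do (( (mask >> i) & 1 )) && echo "core $i"; done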
00:25:13.571 [2024-11-20 11:26:05.639860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.571 [2024-11-20 11:26:05.639869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.571 [2024-11-20 11:26:05.639876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.571 [2024-11-20 11:26:05.639883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.571 [2024-11-20 11:26:05.641965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.571 [2024-11-20 11:26:05.642126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.571 [2024-11-20 11:26:05.642127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.571 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.571 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:13.571 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.571 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.571 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.832 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.832 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:13.832 [2024-11-20 11:26:06.518202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.832 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:14.092 Malloc0 00:25:14.093 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.353 11:26:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.614 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.614 [2024-11-20 11:26:07.329199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.875 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:14.875 [2024-11-20 11:26:07.521788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:14.875 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:15.137 [2024-11-20 11:26:07.718526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2847540 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2847540 /var/tmp/bdevperf.sock 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2847540 ']' 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:15.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.137 11:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.082 11:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.082 11:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:16.082 11:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.343 NVMe0n1 00:25:16.343 11:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.603 00:25:16.603 11:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2847745 00:25:16.603 11:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.603 11:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:17.546 11:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.807 [2024-11-20 11:26:10.360102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab74f0 is same with the state(6) to be set 00:25:17.807 [2024-11-20 11:26:10.360142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab74f0 is same with the state(6) to be set 00:25:17.807 [2024-11-20 11:26:10.360148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab74f0 is same with the state(6) to be set 00:25:17.807 
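[editor's note] This is the heart of the test. bdevperf has attached NVMe0 to the same subsystem twice with -x failover, once per listener, so 10.0.0.2:4421 registers as an alternate path rather than a second controller. Removing the 4420 listener (host/failover.sh@43 above) forcibly disconnects the active qpairs, and the bdev_nvme layer fails I/O over to 4421 while bdevperf's verify workload keeps running; the repeated nvmf_tcp_qpair_set_recv_state errors that follow are the target logging each qpair state check as the old connections drain. A hypothetical way to confirm which path is active (not executed in this run) is to query the bdevperf app's RPC socket:

    # hypothetical check: list NVMe0's controller instances and the transport IDs they use
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0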
[... the preceding *ERROR* line, "The recv state of tqpair=0x1ab74f0 is same with the state(6) to be set", repeats verbatim for the remainder of the 4420 listener teardown; the run of identical messages is trimmed here ...]
00:25:17.808 11:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:21.112 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:21.112 00
00:25:21.112 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:21.112 [2024-11-20 11:26:13.820625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab8040 is same with the state(6) to be set
[... the same *ERROR* line repeats verbatim for tqpair=0x1ab8040 throughout the 4421 listener teardown; duplicates trimmed ...]
00:25:21.374 11:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:24.673 11:26:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:24.673 [2024-11-20 11:26:17.008426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:24.673 11:26:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:25.616 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:25.616 [2024-11-20 11:26:18.200577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set
[... the same *ERROR* line repeats verbatim for tqpair=0x197d4c0 during the 4422 listener teardown; the captured log ends partway through this run ...]
state(6) to be set 00:25:25.616 [2024-11-20 11:26:18.200854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.616 [2024-11-20 11:26:18.200858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.616 [2024-11-20 11:26:18.200862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.616 [2024-11-20 11:26:18.200867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.616 [2024-11-20 11:26:18.200871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 [2024-11-20 11:26:18.200923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197d4c0 is same with the state(6) to be set 00:25:25.617 11:26:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2847745 00:25:32.214 { 00:25:32.214 "results": [ 00:25:32.214 { 00:25:32.214 "job": "NVMe0n1", 00:25:32.214 "core_mask": "0x1", 00:25:32.214 "workload": "verify", 00:25:32.214 "status": "finished", 00:25:32.214 "verify_range": { 00:25:32.214 "start": 0, 00:25:32.214 "length": 16384 00:25:32.214 }, 00:25:32.214 "queue_depth": 128, 00:25:32.214 "io_size": 4096, 00:25:32.214 "runtime": 15.008122, 00:25:32.214 "iops": 12418.34254812161, 00:25:32.214 "mibps": 48.50915057860004, 00:25:32.214 "io_failed": 6869, 00:25:32.214 "io_timeout": 0, 00:25:32.214 "avg_latency_us": 9920.42344533278, 00:25:32.214 "min_latency_us": 535.8933333333333, 00:25:32.214 "max_latency_us": 21408.426666666666 00:25:32.214 } 00:25:32.214 ], 00:25:32.214 "core_count": 1 00:25:32.214 } 00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
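The JSON object above is bdevperf's summary for the 15-second verify job: queue depth 128, 4 KiB I/Os, roughly 12.4K IOPS sustained, with 6869 I/Os failed while the listeners were flipped between ports. The headline numbers are easy to pull out with jq; this is a minimal sketch, assuming the object has been saved to results.json (a hypothetical filename, the harness only prints it inline):

  # Extract IOPS and the fraction of submitted I/Os that failed:
  # failed_ratio = io_failed / (iops * runtime)
  jq '.results[0] | {iops, io_failed, failed_ratio: (.io_failed / (.iops * .runtime))}' results.json

With the figures above this comes to about 6869 / (12418.34 * 15.008) ≈ 0.037, i.e. roughly 3.7% of I/Os hit a path failure during the listener flips and were reported as failed.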
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2847540
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2847540 ']'
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2847540
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2847540
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2847540'
00:25:32.214 killing process with pid 2847540
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2847540
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2847540
00:25:32.214 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:32.214 [2024-11-20 11:26:07.804967] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:25:32.214 [2024-11-20 11:26:07.805048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847540 ]
00:25:32.214 [2024-11-20 11:26:07.898886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:32.214 [2024-11-20 11:26:07.952383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:32.214 Running I/O for 15 seconds...
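The try.txt dump that follows is the host-side view of each failover: when the listener the host is connected to is withdrawn, the TCP connection drops, every command still outstanding on that queue pair completes with ABORTED - SQ DELETION, and bdev_nvme then starts failover (here from 10.0.0.2:4420 to 10.0.0.2:4421) and resets the controller. A cheap cross-check of this log against the summary above is to count those abort completions; a minimal sketch, assuming try.txt is still present at the path the test cats it from:

  # Each aborted command leaves one 'ABORTED - SQ DELETION' completion line;
  # the count should be on the order of bdevperf's io_failed (6869 here),
  # though retried commands mean the two numbers need not match exactly.
  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt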
00:25:32.214 10958.00 IOPS, 42.80 MiB/s [2024-11-20T10:26:24.956Z] [2024-11-20 11:26:10.362374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.214 [2024-11-20 11:26:10.362408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.214 [2024-11-20 11:26:10.362425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.214 [2024-11-20 11:26:10.362433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.214 [2024-11-20 11:26:10.362443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.214 [2024-11-20 11:26:10.362450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.214 [2024-11-20 11:26:10.362460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.214 [2024-11-20 11:26:10.362468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.214 [2024-11-20 11:26:10.362477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.214 [2024-11-20 11:26:10.362484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.214 [2024-11-20 11:26:10.362494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.214 [2024-11-20 11:26:10.362501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.214 [2024-11-20 11:26:10.362511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 
11:26:10.362751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.362991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.362998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:94840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:94848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.215 [2024-11-20 11:26:10.363181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.215 [2024-11-20 11:26:10.363188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.216 [2024-11-20 11:26:10.363422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:32.216 [2024-11-20 11:26:10.363439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363609] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363774] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.216 [2024-11-20 11:26:10.363840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.216 [2024-11-20 11:26:10.363849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.363988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.363997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 
[2024-11-20 11:26:10.364116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.217 [2024-11-20 11:26:10.364275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.217 [2024-11-20 11:26:10.364284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.217 [2024-11-20 11:26:10.364291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-20 11:26:10.364303-10.364541: 15 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, WRITE sqid:1 lba:95408-95520 (step 8) len:8, all ABORTED - SQ DELETION (00/08) ...]
00:25:32.218 [2024-11-20 11:26:10.364567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:32.218 [2024-11-20 11:26:10.364574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:32.218 [2024-11-20 11:26:10.364581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95528 len:8 PRP1 0x0 PRP2 0x0
00:25:32.218 [2024-11-20 11:26:10.364589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:32.218 [2024-11-20 11:26:10.364631] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 2024-11-20 11:26:10.364652-10.364707: four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) aborted with SQ DELETION (00/08) ...]
00:25:32.218 [2024-11-20 11:26:10.364715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:32.218 [2024-11-20 11:26:10.364753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb80d70 (9): Bad file descriptor
00:25:32.218 [2024-11-20 11:26:10.368298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:32.218 [2024-11-20 11:26:10.397790] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:32.218 10962.00 IOPS, 42.82 MiB/s [2024-11-20T10:26:24.960Z] 11153.67 IOPS, 43.57 MiB/s [2024-11-20T10:26:24.960Z] 11524.00 IOPS, 45.02 MiB/s [2024-11-20T10:26:24.960Z]
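The "(00/08)" printed with every completion above is the status-code-type/status-code pair as rendered by spdk_nvme_print_completion: status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion", the expected status for I/O still in flight on a submission queue torn down during failover. Below is a minimal decode sketch in plain Python; it assumes only the "(sct/sc)" hex format visible in the lines above, and the KNOWN table is deliberately limited to the one pair this log contains.

import re

# (SCT, SC) -> meaning; 0x00/0x08 is the NVMe generic status
# "Command Aborted due to SQ Deletion", the only pair seen in this log.
KNOWN = {(0x00, 0x08): "Command Aborted due to SQ Deletion"}

def decode_status(line: str):
    # The completion status is rendered as "(SCT/SC)" in hex, e.g. "(00/08)".
    match = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if match is None:
        return None
    sct, sc = int(match.group(1), 16), int(match.group(2), 16)
    return KNOWN.get((sct, sc), f"sct=0x{sct:02x} sc=0x{sc:02x}")

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0"))  # Command Aborted due to SQ Deletion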
[2024-11-20 11:26:13.823272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:32.218 [2024-11-20 11:26:13.823301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-20 11:26:13.823313-13.824424: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, READ sqid:1 lba:39304-39344 and WRITE sqid:1 lba:39400-40112 (step 8) len:8, all ABORTED - SQ DELETION (00/08) ...]
[... 2024-11-20 11:26:13.824440-13.837040: queued i/o aborted and completed manually (nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request): WRITE sqid:1 cid:0 lba:40120-40312 and READ sqid:1 cid:0 lba:39352-39392 (step 8) len:8 PRP1 0x0 PRP2 0x0, all ABORTED - SQ DELETION (00/08) ...]
00:25:32.222 [2024-11-20 11:26:13.837074] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... 2024-11-20 11:26:13.837098-13.837138: four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3-0) aborted with SQ DELETION (00/08) ...]
00:25:32.222 [2024-11-20 11:26:13.837143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:32.222 [2024-11-20 11:26:13.837172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb80d70 (9): Bad file descriptor
00:25:32.222 [2024-11-20 11:26:13.840107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:32.222 [2024-11-20 11:26:13.871658] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
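Abort runs like the two above are easier to audit in aggregate than line by line. Below is a minimal summarizer sketch in plain Python; the log path is a placeholder, and the regexes assume only the nvme_io_qpair_print_command and bdev_nvme_failover_trid line shapes visible in this log.

import re
import sys

# Each aborted command is echoed via nvme_io_qpair_print_command, e.g.
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95400 len:8 ...
CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+)")
# Each abort episode ends with a failover notice, e.g.
#   bdev_nvme_failover_trid: *NOTICE*: [...] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
FAILOVER = re.compile(r"bdev_nvme_failover_trid: .*Start failover from (\S+) to (\S+)")

def summarize(path: str) -> None:
    count, lbas = 0, []
    with open(path) as log:
        for line in log:
            # A single console line can carry many command/completion entries.
            for cmd in CMD.finditer(line):
                count += 1
                lbas.append(int(cmd.group(2)))
            failover = FAILOVER.search(line)
            if failover and count:
                print(f"{count} aborted I/Os (lba {min(lbas)}-{max(lbas)}), "
                      f"then failover {failover.group(1)} -> {failover.group(2)}")
                count, lbas = 0, []

if __name__ == "__main__":
    summarize(sys.argv[1])  # e.g. python3 summarize.py console.log (placeholder file name)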
00:25:32.222 11639.60 IOPS, 45.47 MiB/s [2024-11-20T10:26:24.964Z] 11868.17 IOPS, 46.36 MiB/s [2024-11-20T10:26:24.964Z] 12019.14 IOPS, 46.95 MiB/s [2024-11-20T10:26:24.964Z] 12135.75 IOPS, 47.41 MiB/s [2024-11-20T10:26:24.964Z]
[2024-11-20 11:26:18.203357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.222 [2024-11-20 11:26:18.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-20 11:26:18.203400-18.203726: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs, WRITE sqid:1 lba:108984-109200 (step 8) len:8, all ABORTED - SQ DELETION (00/08) ...]
00:25:32.223 [2024-11-20 11:26:18.203732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:32.223 [2024-11-20 11:26:18.203737] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.203989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.203994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.223 [2024-11-20 11:26:18.204000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.223 [2024-11-20 11:26:18.204006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.224 [2024-11-20 11:26:18.204018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.224 [2024-11-20 11:26:18.204029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.224 [2024-11-20 11:26:18.204041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.224 [2024-11-20 11:26:18.204052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.224 [2024-11-20 11:26:18.204063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.224 [2024-11-20 11:26:18.204075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:32.224 [2024-11-20 11:26:18.204092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204212] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-11-20 11:26:18.204446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.224 [2024-11-20 11:26:18.204467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109656 len:8 PRP1 0x0 PRP2 0x0 00:25:32.224 [2024-11-20 11:26:18.204472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.224 [2024-11-20 11:26:18.204480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.224 [2024-11-20 11:26:18.204484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109664 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109672 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109680 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109688 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109696 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204569] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109704 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109712 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109720 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109728 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109736 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109744 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109752 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109760 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109768 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109776 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109784 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109792 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 
[2024-11-20 11:26:18.204794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109800 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109808 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109816 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109824 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109832 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.225 [2024-11-20 11:26:18.204896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109840 len:8 PRP1 0x0 PRP2 0x0 00:25:32.225 [2024-11-20 11:26:18.204901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.225 [2024-11-20 11:26:18.204906] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.225 [2024-11-20 11:26:18.204910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109848 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109856 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109864 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109872 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109880 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109888 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109896 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109904 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109912 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109920 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109928 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 11:26:18.220396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109936 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:32.226 [2024-11-20 
11:26:18.220421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:32.226 [2024-11-20 11:26:18.220426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109944 len:8 PRP1 0x0 PRP2 0x0 00:25:32.226 [2024-11-20 11:26:18.220434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220477] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:32.226 [2024-11-20 11:26:18.220507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.226 [2024-11-20 11:26:18.220516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.226 [2024-11-20 11:26:18.220533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.226 [2024-11-20 11:26:18.220547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.226 [2024-11-20 11:26:18.220562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.226 [2024-11-20 11:26:18.220568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:32.226 [2024-11-20 11:26:18.220596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb80d70 (9): Bad file descriptor 00:25:32.226 [2024-11-20 11:26:18.223845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:32.226 [2024-11-20 11:26:18.291359] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:32.226 12109.44 IOPS, 47.30 MiB/s [2024-11-20T10:26:24.968Z] 12206.90 IOPS, 47.68 MiB/s [2024-11-20T10:26:24.968Z] 12252.18 IOPS, 47.86 MiB/s [2024-11-20T10:26:24.968Z] 12306.67 IOPS, 48.07 MiB/s [2024-11-20T10:26:24.968Z] 12344.15 IOPS, 48.22 MiB/s [2024-11-20T10:26:24.968Z] 12387.50 IOPS, 48.39 MiB/s [2024-11-20T10:26:24.968Z] 12416.60 IOPS, 48.50 MiB/s
00:25:32.226 Latency(us)
00:25:32.226 [2024-11-20T10:26:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:32.226 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:32.226 Verification LBA range: start 0x0 length 0x4000
00:25:32.226 NVMe0n1 : 15.01 12418.34 48.51 457.69 0.00 9920.42 535.89 21408.43
00:25:32.226 [2024-11-20T10:26:24.968Z] ===================================================================================================================
00:25:32.226 [2024-11-20T10:26:24.968Z] Total : 12418.34 48.51 457.69 0.00 9920.42 535.89 21408.43
00:25:32.226 Received shutdown signal, test time was about 15.000000 seconds
00:25:32.226
00:25:32.226 Latency(us)
00:25:32.226 [2024-11-20T10:26:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:32.226 [2024-11-20T10:26:24.968Z] ===================================================================================================================
00:25:32.226 [2024-11-20T10:26:24.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2850741
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2850741 /var/tmp/bdevperf.sock
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2850741 ']'
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
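Note: the bdevperf instance started above uses -z, which brings the app up and then parks it until a perform_tests RPC arrives on the -r socket, so the harness can wire up controllers before any I/O runs; waitforlisten blocks until that socket answers. A condensed sketch of the launch-and-wait pattern (the polling loop is a simplified stand-in for the waitforlisten helper in autotest_common.sh, and $SPDK stands for the workspace path shown in the trace):

    # start bdevperf paused (-z) with an RPC socket, remember its pid
    "$SPDK"/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # crude wait until the UNIX-domain RPC socket exists
    while ! [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done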
00:25:32.226 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:32.227 11:26:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:32.799 11:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:32.799 11:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:25:32.799 11:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:33.059 [2024-11-20 11:26:25.548135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:33.059 11:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:33.059 [2024-11-20 11:26:25.732590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:33.059 11:26:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:33.630 NVMe0n1
00:25:33.630 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:33.891
00:25:33.891 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:34.152
00:25:34.152 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:34.152 11:26:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:34.413 11:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:34.673 11:26:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:37.972 11:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:37.972 11:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:37.972 11:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2851935
00:25:37.972 11:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:37.972 11:26:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2851935
00:25:38.913 "workload": "verify", 00:25:38.913 "status": "finished", 00:25:38.913 "verify_range": { 00:25:38.913 "start": 0, 00:25:38.913 "length": 16384 00:25:38.913 }, 00:25:38.913 "queue_depth": 128, 00:25:38.913 "io_size": 4096, 00:25:38.913 "runtime": 1.002909, 00:25:38.913 "iops": 12825.69006759337, 00:25:38.913 "mibps": 50.1003518265366, 00:25:38.913 "io_failed": 0, 00:25:38.913 "io_timeout": 0, 00:25:38.913 "avg_latency_us": 9946.697730441318, 00:25:38.913 "min_latency_us": 1433.6, 00:25:38.913 "max_latency_us": 15073.28 00:25:38.913 } 00:25:38.913 ], 00:25:38.913 "core_count": 1 00:25:38.913 } 00:25:38.913 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:38.913 [2024-11-20 11:26:24.586856] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:25:38.913 [2024-11-20 11:26:24.586915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850741 ] 00:25:38.913 [2024-11-20 11:26:24.672350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.913 [2024-11-20 11:26:24.702002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.914 [2024-11-20 11:26:27.187617] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:38.914 [2024-11-20 11:26:27.187653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.914 [2024-11-20 11:26:27.187662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.914 [2024-11-20 11:26:27.187668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.914 [2024-11-20 11:26:27.187673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.914 [2024-11-20 11:26:27.187679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.914 [2024-11-20 11:26:27.187684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.914 [2024-11-20 11:26:27.187690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:38.914 [2024-11-20 11:26:27.187695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.914 [2024-11-20 11:26:27.187701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:25:38.914 [2024-11-20 11:26:27.187721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:38.914 [2024-11-20 11:26:27.187732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18aad70 (9): Bad file descriptor 00:25:38.914 [2024-11-20 11:26:27.279330] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:38.914 Running I/O for 1 seconds... 00:25:38.914 12735.00 IOPS, 49.75 MiB/s 00:25:38.914 Latency(us) 00:25:38.914 [2024-11-20T10:26:31.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.914 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:38.914 Verification LBA range: start 0x0 length 0x4000 00:25:38.914 NVMe0n1 : 1.00 12825.69 50.10 0.00 0.00 9946.70 1433.60 15073.28 00:25:38.914 [2024-11-20T10:26:31.656Z] =================================================================================================================== 00:25:38.914 [2024-11-20T10:26:31.656Z] Total : 12825.69 50.10 0.00 0.00 9946.70 1433.60 15073.28 00:25:38.914 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.914 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:39.174 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.174 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.174 11:26:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:39.440 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.719 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2850741 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2850741 ']' 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2850741 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2850741 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2850741' 00:25:43.087 killing process with pid 2850741 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2850741 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2850741 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:43.087 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:43.348 rmmod nvme_tcp 00:25:43.348 rmmod nvme_fabrics 00:25:43.348 rmmod nvme_keyring 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2847029 ']' 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2847029 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2847029 ']' 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2847029 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2847029 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2847029' 00:25:43.348 killing process with pid 2847029 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2847029 00:25:43.348 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2847029 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.348 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.894 00:25:45.894 real 0m40.511s 00:25:45.894 user 2m4.491s 00:25:45.894 sys 0m8.823s 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:45.894 ************************************ 00:25:45.894 END TEST nvmf_failover 00:25:45.894 ************************************ 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.894 ************************************ 00:25:45.894 START TEST nvmf_host_discovery 00:25:45.894 ************************************ 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:45.894 * Looking for test storage... 
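nvmf_failover's teardown above runs in a fixed order. Condensed, with only commands visible in the log (the netns removal is inferred from _remove_spdk_ns and the namespace name, so treat it as an assumption):

    # Delete the subsystem on the target, then unload host kernel modules;
    # removing nvme-tcp also drags out nvme_fabrics and nvme_keyring.
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    kill 2847029                     # the nvmf target PID from this run
    # Strip only the SPDK-tagged firewall rules and leave the rest intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the target namespace (assumed equivalent of _remove_spdk_ns)
    # and flush the initiator-side address.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1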
00:25:45.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:45.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.894 --rc genhtml_branch_coverage=1 00:25:45.894 --rc genhtml_function_coverage=1 00:25:45.894 --rc genhtml_legend=1 00:25:45.894 --rc geninfo_all_blocks=1 00:25:45.894 --rc geninfo_unexecuted_blocks=1 00:25:45.894 00:25:45.894 ' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:45.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.894 --rc genhtml_branch_coverage=1 00:25:45.894 --rc genhtml_function_coverage=1 00:25:45.894 --rc genhtml_legend=1 00:25:45.894 --rc geninfo_all_blocks=1 00:25:45.894 --rc geninfo_unexecuted_blocks=1 00:25:45.894 00:25:45.894 ' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:45.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.894 --rc genhtml_branch_coverage=1 00:25:45.894 --rc genhtml_function_coverage=1 00:25:45.894 --rc genhtml_legend=1 00:25:45.894 --rc geninfo_all_blocks=1 00:25:45.894 --rc geninfo_unexecuted_blocks=1 00:25:45.894 00:25:45.894 ' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:45.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.894 --rc genhtml_branch_coverage=1 00:25:45.894 --rc genhtml_function_coverage=1 00:25:45.894 --rc genhtml_legend=1 00:25:45.894 --rc geninfo_all_blocks=1 00:25:45.894 --rc geninfo_unexecuted_blocks=1 00:25:45.894 00:25:45.894 ' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:45.894 11:26:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:45.894 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:45.895 11:26:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.037 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:54.038 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:54.038 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.038 11:26:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:54.038 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:54.038 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.038 
11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:25:54.038 00:25:54.038 --- 10.0.0.2 ping statistics --- 00:25:54.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.038 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:25:54.038 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:25:54.038 00:25:54.039 --- 10.0.0.1 ping statistics --- 00:25:54.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.039 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2857105 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2857105 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2857105 ']' 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.039 11:26:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.039 [2024-11-20 11:26:46.050516] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:25:54.039 [2024-11-20 11:26:46.050583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.039 [2024-11-20 11:26:46.153644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.039 [2024-11-20 11:26:46.204972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.039 [2024-11-20 11:26:46.205024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.039 [2024-11-20 11:26:46.205033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.039 [2024-11-20 11:26:46.205040] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.039 [2024-11-20 11:26:46.205046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.039 [2024-11-20 11:26:46.205785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.300 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.300 [2024-11-20 11:26:46.915932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.301 [2024-11-20 11:26:46.928171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.301 null0 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.301 null1 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2857454 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2857454 /tmp/host.sock 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2857454 ']' 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:54.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.301 11:26:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.301 [2024-11-20 11:26:47.024351] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:25:54.301 [2024-11-20 11:26:47.024415] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857454 ] 00:25:54.561 [2024-11-20 11:26:47.117438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.562 [2024-11-20 11:26:47.169989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.132 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.394 11:26:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.394 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 [2024-11-20 11:26:48.187446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:55.656 11:26:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.656 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.917 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:55.917 11:26:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:56.178 [2024-11-20 11:26:48.905421] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:56.178 [2024-11-20 11:26:48.905453] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:56.178 [2024-11-20 11:26:48.905470] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.438 [2024-11-20 11:26:48.993731] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:56.438 [2024-11-20 11:26:49.177067] 
bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:56.699 [2024-11-20 11:26:49.178271] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1af5780:1 started. 00:25:56.699 [2024-11-20 11:26:49.180136] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.699 [2024-11-20 11:26:49.180178] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:56.699 [2024-11-20 11:26:49.183931] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1af5780 was disconnected and freed. delete nvme_qpair. 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.699 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.959 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.959 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.960 11:26:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
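
The assertions in this stretch are built from three tiny query helpers whose pipelines are fully visible in the xtrace. The following is a reconstruction from the trace alone, not the actual host/discovery.sh source; rpc_cmd is approximated here as a direct scripts/rpc.py call (in the harness it is a wrapper from common/autotest_common.sh), and $SPDK_ROOT is an assumed checkout path:

    rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }  # assumption: plain rpc.py invocation

    get_subsystem_names() {  # controller names on the host side, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {  # bdevs created from attached namespaces, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {  # service IDs (ports) of every path to controller $1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The trailing xargs collapses jq's one-name-per-line output into the single space-separated string that the [[ ... == ... ]] comparisons in the trace expect.
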
00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:56.960 [2024-11-20 11:26:49.632064] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1af5b20:1 started. 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.960 [2024-11-20 11:26:49.635715] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1af5b20 was disconnected and freed. delete nvme_qpair. 
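
The notification arithmetic running through the trace (notification_count, notify_id) follows from notify_get_notifications taking the id of the last event already consumed: each bdev registered or unregistered on the host raises one notification, so the helper counts everything newer than notify_id and advances it. A sketch reconstructed from the xtrace, reusing rpc_cmd from the sketch above, not the script source:

    notify_id=0   # id of the last notification already accounted for

    get_notification_count() {
        local events
        events=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id")
        notification_count=$(jq '. | length' <<< "$events")
        ((notify_id += notification_count))   # matches notify_id going 0 -> 1 -> 2 in the trace
    }

This is why attaching the null1 namespace is expected to move the count by exactly 1: one new bdev (nvme0n2), one notification.
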
00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.960 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.220 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.220 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:57.220 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:57.220 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:57.220 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.220 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.221 [2024-11-20 11:26:49.735954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:57.221 [2024-11-20 11:26:49.736500] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:57.221 [2024-11-20 11:26:49.736540] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.221 [2024-11-20 11:26:49.823773] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:57.221 11:26:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:57.221 [2024-11-20 11:26:49.930799] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:57.221 [2024-11-20 11:26:49.930855] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:57.221 [2024-11-20 11:26:49.930866] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:57.221 [2024-11-20 11:26:49.930872] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.162 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.426 [2024-11-20 11:26:50.991553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.426 [2024-11-20 11:26:50.991599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.426 [2024-11-20 11:26:50.991613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.426 [2024-11-20 11:26:50.991621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.426 [2024-11-20 11:26:50.991630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.426 [2024-11-20 11:26:50.991638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.426 [2024-11-20 11:26:50.991647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:58.426 [2024-11-20 11:26:50.991655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:58.426 [2024-11-20 11:26:50.991663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.426 [2024-11-20 11:26:50.991741] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:58.426 [2024-11-20 11:26:50.991763] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.426 11:26:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:58.426 [2024-11-20 11:26:51.001539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:58.426 [2024-11-20 11:26:51.011579] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.426 [2024-11-20 11:26:51.011596] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.426 [2024-11-20 11:26:51.011602] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.426 [2024-11-20 11:26:51.011619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.426 [2024-11-20 11:26:51.011644] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.426 [2024-11-20 11:26:51.012068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.426 [2024-11-20 11:26:51.012086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.426 [2024-11-20 11:26:51.012096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.426 [2024-11-20 11:26:51.012109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.426 [2024-11-20 11:26:51.012130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.426 [2024-11-20 11:26:51.012141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.426 [2024-11-20 11:26:51.012155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.426 [2024-11-20 11:26:51.012174] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.426 [2024-11-20 11:26:51.012180] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.426 [2024-11-20 11:26:51.012186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
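
Every wait in this test funnels through the waitforcondition primitive whose expansion is visible in the @918-@924 xtrace lines (local cond, local max=10, (( max-- )), eval, sleep 1, return 0). Reassembled as a sketch under those observations, it is a bounded retry loop over an arbitrary bash expression:

    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met within the retry budget
            sleep 1                    # the 'sleep 1' steps seen at @924
        done
        return 1   # assumption: exhausting the budget fails the caller
    }

That is why the trace shows roughly one re-evaluation of the condition per second while the controller is still mid-reset.
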
00:25:58.426 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.426 [2024-11-20 11:26:51.021676] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.426 [2024-11-20 11:26:51.021688] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.426 [2024-11-20 11:26:51.021693] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.426 [2024-11-20 11:26:51.021698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.426 [2024-11-20 11:26:51.021714] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.426 [2024-11-20 11:26:51.022049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.426 [2024-11-20 11:26:51.022063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.426 [2024-11-20 11:26:51.022071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.426 [2024-11-20 11:26:51.022083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.426 [2024-11-20 11:26:51.022101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.426 [2024-11-20 11:26:51.022108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.426 [2024-11-20 11:26:51.022116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.426 [2024-11-20 11:26:51.022123] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.426 [2024-11-20 11:26:51.022128] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.426 [2024-11-20 11:26:51.022132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.427 [2024-11-20 11:26:51.031745] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.427 [2024-11-20 11:26:51.031763] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.427 [2024-11-20 11:26:51.031768] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.427 [2024-11-20 11:26:51.031773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.427 [2024-11-20 11:26:51.031788] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:58.427 [2024-11-20 11:26:51.032101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.427 [2024-11-20 11:26:51.032113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.427 [2024-11-20 11:26:51.032121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.427 [2024-11-20 11:26:51.032133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.427 [2024-11-20 11:26:51.032144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.427 [2024-11-20 11:26:51.032151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.427 [2024-11-20 11:26:51.032165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.427 [2024-11-20 11:26:51.032172] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.427 [2024-11-20 11:26:51.032177] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.427 [2024-11-20 11:26:51.032181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.427 [2024-11-20 11:26:51.041818] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.427 [2024-11-20 11:26:51.041832] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.427 [2024-11-20 11:26:51.041835] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.427 [2024-11-20 11:26:51.041839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.427 [2024-11-20 11:26:51.041852] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.427 [2024-11-20 11:26:51.042030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.427 [2024-11-20 11:26:51.042042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.427 [2024-11-20 11:26:51.042048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.427 [2024-11-20 11:26:51.042057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.427 [2024-11-20 11:26:51.042065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.427 [2024-11-20 11:26:51.042070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.427 [2024-11-20 11:26:51.042075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.427 [2024-11-20 11:26:51.042080] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:58.427 [2024-11-20 11:26:51.042084] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.427 [2024-11-20 11:26:51.042087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) [2024-11-20 11:26:51.051880] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. [2024-11-20 11:26:51.051893] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' [2024-11-20 11:26:51.051902] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. [2024-11-20 11:26:51.051906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.427 [2024-11-20 11:26:51.051916] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.427 [2024-11-20 11:26:51.052400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.427 [2024-11-20 11:26:51.052453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.427 [2024-11-20 11:26:51.052463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.427 [2024-11-20 11:26:51.052483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.427 [2024-11-20 11:26:51.052509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.427 [2024-11-20 11:26:51.052519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.427 [2024-11-20 11:26:51.052526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.427 [2024-11-20 11:26:51.052533] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.427 [2024-11-20 11:26:51.052537] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.427 [2024-11-20 11:26:51.052540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
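
The burst of identical failures here is expected: host/discovery.sh@127 just removed the 4420 listener, so every reconnect attempt the initiator makes against 10.0.0.2:4420 is refused at the TCP level (errno 111 is ECONNREFUSED), and bdev_nvme keeps cycling delete-qpairs / disconnect / reconnect until the next discovery log page drops the 4420 path. A throwaway probe of the same refusal, assuming a bash with /dev/tcp support on the test host:

    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused, as expected after nvmf_subsystem_remove_listener"
    fi

Port 4421, by contrast, still accepts connections, which is what the final path check below waits for.
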
00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.427 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.427 [2024-11-20 11:26:51.061948] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.427 [2024-11-20 11:26:51.061961] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.427 [2024-11-20 11:26:51.061964] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.427 [2024-11-20 11:26:51.061967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.427 [2024-11-20 11:26:51.061991] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.427 [2024-11-20 11:26:51.062437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.427 [2024-11-20 11:26:51.062486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.427 [2024-11-20 11:26:51.062495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.427 [2024-11-20 11:26:51.062514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.427 [2024-11-20 11:26:51.063147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.427 [2024-11-20 11:26:51.063169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.427 [2024-11-20 11:26:51.063176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.427 [2024-11-20 11:26:51.063182] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.427 [2024-11-20 11:26:51.063186] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.428 [2024-11-20 11:26:51.063189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.428 [2024-11-20 11:26:51.072022] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.428 [2024-11-20 11:26:51.072033] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.428 [2024-11-20 11:26:51.072036] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:58.428 [2024-11-20 11:26:51.072040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.428 [2024-11-20 11:26:51.072052] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.428 [2024-11-20 11:26:51.072564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.428 [2024-11-20 11:26:51.072609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.428 [2024-11-20 11:26:51.072618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.428 [2024-11-20 11:26:51.072636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.428 [2024-11-20 11:26:51.072662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.428 [2024-11-20 11:26:51.072671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.428 [2024-11-20 11:26:51.072680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.428 [2024-11-20 11:26:51.072686] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.428 [2024-11-20 11:26:51.072690] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.428 [2024-11-20 11:26:51.072693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.428 [2024-11-20 11:26:51.082084] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.428 [2024-11-20 11:26:51.082098] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.428 [2024-11-20 11:26:51.082102] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.428 [2024-11-20 11:26:51.082110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.428 [2024-11-20 11:26:51.082125] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:58.428 [2024-11-20 11:26:51.082424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.428 [2024-11-20 11:26:51.082436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.428 [2024-11-20 11:26:51.082442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.428 [2024-11-20 11:26:51.082451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.428 [2024-11-20 11:26:51.082459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.428 [2024-11-20 11:26:51.082463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.428 [2024-11-20 11:26:51.082468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.428 [2024-11-20 11:26:51.082473] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.428 [2024-11-20 11:26:51.082476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.428 [2024-11-20 11:26:51.082479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.428 [2024-11-20 11:26:51.092156] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.428 [2024-11-20 11:26:51.092172] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.428 [2024-11-20 11:26:51.092175] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.428 [2024-11-20 11:26:51.092179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.428 [2024-11-20 11:26:51.092190] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.428 [2024-11-20 11:26:51.092487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.428 [2024-11-20 11:26:51.092499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.428 [2024-11-20 11:26:51.092505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.428 [2024-11-20 11:26:51.092512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.428 [2024-11-20 11:26:51.092519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.428 [2024-11-20 11:26:51.092525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.428 [2024-11-20 11:26:51.092531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.428 [2024-11-20 11:26:51.092536] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:58.428 [2024-11-20 11:26:51.092539] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.428 [2024-11-20 11:26:51.092542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.428 [2024-11-20 11:26:51.102220] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.428 [2024-11-20 11:26:51.102234] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.428 [2024-11-20 11:26:51.102237] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.428 [2024-11-20 11:26:51.102240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.428 [2024-11-20 11:26:51.102250] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:58.428 [2024-11-20 11:26:51.102452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.428 [2024-11-20 11:26:51.102461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.428 [2024-11-20 11:26:51.102466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.428 [2024-11-20 11:26:51.102474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.428 [2024-11-20 11:26:51.102481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.428 [2024-11-20 11:26:51.102486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.428 [2024-11-20 11:26:51.102491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.428 [2024-11-20 11:26:51.102496] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.428 [2024-11-20 11:26:51.102499] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.428 [2024-11-20 11:26:51.102502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:58.428 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.429 [2024-11-20 11:26:51.112279] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:58.429 [2024-11-20 11:26:51.112290] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:58.429 [2024-11-20 11:26:51.112293] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:58.429 [2024-11-20 11:26:51.112300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:58.429 [2024-11-20 11:26:51.112310] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:58.429 [2024-11-20 11:26:51.112597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.429 [2024-11-20 11:26:51.112605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac5e10 with addr=10.0.0.2, port=4420 00:25:58.429 [2024-11-20 11:26:51.112610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5e10 is same with the state(6) to be set 00:25:58.429 [2024-11-20 11:26:51.112618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5e10 (9): Bad file descriptor 00:25:58.429 [2024-11-20 11:26:51.112625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:58.429 [2024-11-20 11:26:51.112630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:58.429 [2024-11-20 11:26:51.112635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:58.429 [2024-11-20 11:26:51.112641] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:58.429 [2024-11-20 11:26:51.112644] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:58.429 [2024-11-20 11:26:51.112648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:58.429 [2024-11-20 11:26:51.120796] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:58.429 [2024-11-20 11:26:51.120814] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:58.429 11:26:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.812 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# [[ 4421 == \4\4\2\1 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
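The waitforcondition trace above (autotest_common.sh lines 918-924) is a plain poll-and-retry helper. A minimal sketch reconstructed from the xtrace, not the verbatim source:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
            sleep 1
        done
        return 1                       # timeout; exact failure path assumed
    }

Passing the condition as a string and re-running it through eval is why the trace expands get_subsystem_paths afresh on every iteration.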
00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:59.813 
11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.813 11:26:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.754 [2024-11-20 11:26:53.485305] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:00.754 [2024-11-20 11:26:53.485319] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:00.754 [2024-11-20 11:26:53.485329] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.014 [2024-11-20 11:26:53.572585] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.274 [2024-11-20 11:26:53.843974] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:01.274 [2024-11-20 11:26:53.844717] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1ac3700:1 started. 
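Discovery has just been restarted and the first attach completed (new subsystem nvme0 on 10.0.0.2:4421, qpair 0x1ac3700 connecting). The same call, issued by hand with exactly the flags visible in the trace:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w        # -w blocks until attach finishes

The NOT rpc_cmd that follows repeats the call with the same -b name on purpose: the expected result is JSON-RPC error -17 ("File exists"), shown below, which the wrapper counts as a pass for this negative test.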
00:26:01.274 [2024-11-20 11:26:53.846111] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.274 [2024-11-20 11:26:53.846134] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.274 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.274 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.274 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.275 [2024-11-20 11:26:53.855257] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1ac3700 was disconnected and freed. delete nvme_qpair. 
00:26:01.275 request: 00:26:01.275 { 00:26:01.275 "name": "nvme", 00:26:01.275 "trtype": "tcp", 00:26:01.275 "traddr": "10.0.0.2", 00:26:01.275 "adrfam": "ipv4", 00:26:01.275 "trsvcid": "8009", 00:26:01.275 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.275 "wait_for_attach": true, 00:26:01.275 "method": "bdev_nvme_start_discovery", 00:26:01.275 "req_id": 1 00:26:01.275 } 00:26:01.275 Got JSON-RPC error response 00:26:01.275 response: 00:26:01.275 { 00:26:01.275 "code": -17, 00:26:01.275 "message": "File exists" 00:26:01.275 } 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.275 request: 00:26:01.275 { 00:26:01.275 "name": "nvme_second", 00:26:01.275 "trtype": "tcp", 00:26:01.275 "traddr": "10.0.0.2", 00:26:01.275 "adrfam": "ipv4", 00:26:01.275 "trsvcid": "8009", 00:26:01.275 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.275 "wait_for_attach": true, 00:26:01.275 "method": "bdev_nvme_start_discovery", 00:26:01.275 "req_id": 1 00:26:01.275 } 00:26:01.275 Got JSON-RPC error response 00:26:01.275 response: 00:26:01.275 { 00:26:01.275 "code": -17, 00:26:01.275 "message": "File exists" 00:26:01.275 } 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.275 11:26:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.275 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.537 11:26:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.537 11:26:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.478 [2024-11-20 11:26:55.109576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.478 [2024-11-20 11:26:55.109599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b01910 with addr=10.0.0.2, port=8010 00:26:02.478 [2024-11-20 11:26:55.109608] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:02.478 [2024-11-20 11:26:55.109614] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:02.478 [2024-11-20 11:26:55.109618] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.419 [2024-11-20 11:26:56.111867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.419 [2024-11-20 11:26:56.111886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac2e70 with addr=10.0.0.2, port=8010 00:26:03.419 [2024-11-20 11:26:56.111894] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.419 [2024-11-20 11:26:56.111899] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.419 [2024-11-20 11:26:56.111903] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.801 [2024-11-20 11:26:57.113912] 
bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:04.801 request: 00:26:04.802 { 00:26:04.802 "name": "nvme_second", 00:26:04.802 "trtype": "tcp", 00:26:04.802 "traddr": "10.0.0.2", 00:26:04.802 "adrfam": "ipv4", 00:26:04.802 "trsvcid": "8010", 00:26:04.802 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:04.802 "wait_for_attach": false, 00:26:04.802 "attach_timeout_ms": 3000, 00:26:04.802 "method": "bdev_nvme_start_discovery", 00:26:04.802 "req_id": 1 00:26:04.802 } 00:26:04.802 Got JSON-RPC error response 00:26:04.802 response: 00:26:04.802 { 00:26:04.802 "code": -110, 00:26:04.802 "message": "Connection timed out" 00:26:04.802 } 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2857454 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.802 rmmod nvme_tcp 00:26:04.802 rmmod nvme_fabrics 00:26:04.802 rmmod nvme_keyring 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:04.802 11:26:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2857105 ']' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2857105 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2857105 ']' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2857105 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2857105 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2857105' 00:26:04.802 killing process with pid 2857105 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2857105 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2857105 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.802 11:26:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.346 00:26:07.346 real 0m21.251s 00:26:07.346 user 0m25.360s 00:26:07.346 sys 0m7.371s 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.346 ************************************ 00:26:07.346 END TEST nvmf_host_discovery 00:26:07.346 ************************************ 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.346 ************************************ 00:26:07.346 START TEST nvmf_host_multipath_status 00:26:07.346 ************************************ 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.346 * Looking for test storage... 00:26:07.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.346 --rc genhtml_branch_coverage=1 00:26:07.346 --rc genhtml_function_coverage=1 00:26:07.346 --rc genhtml_legend=1 00:26:07.346 --rc geninfo_all_blocks=1 00:26:07.346 --rc geninfo_unexecuted_blocks=1 00:26:07.346 00:26:07.346 ' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.346 --rc genhtml_branch_coverage=1 00:26:07.346 --rc genhtml_function_coverage=1 00:26:07.346 --rc genhtml_legend=1 00:26:07.346 --rc geninfo_all_blocks=1 00:26:07.346 --rc geninfo_unexecuted_blocks=1 00:26:07.346 00:26:07.346 ' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.346 --rc genhtml_branch_coverage=1 00:26:07.346 --rc genhtml_function_coverage=1 00:26:07.346 --rc genhtml_legend=1 00:26:07.346 --rc geninfo_all_blocks=1 00:26:07.346 --rc geninfo_unexecuted_blocks=1 00:26:07.346 00:26:07.346 ' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:07.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.346 --rc genhtml_branch_coverage=1 00:26:07.346 --rc genhtml_function_coverage=1 00:26:07.346 --rc genhtml_legend=1 00:26:07.346 --rc geninfo_all_blocks=1 00:26:07.346 --rc geninfo_unexecuted_blocks=1 00:26:07.346 00:26:07.346 ' 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
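The cmp_versions walk above splits both version strings on '.', '-' and ':' and compares them field by field; with ver1=(1 15) and ver2=(2) the first field already decides. A reduced sketch of that split, assuming the logic shown in the xtrace:

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    (( ver1[0] < ver2[0] )) && echo "1.15 < 2"   # so 'lt 1.15 2' succeeds

Because the installed lcov predates 2.x, the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage strings are exported into LCOV_OPTS, as echoed above.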
00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.346 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:07.347 11:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:15.490 11:27:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:15.490 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:15.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
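The scan above has just matched the first port, 0000:4b:00.0, as vendor 0x8086 / device 0x159b, i.e. an entry from the e810 array (driver ice); the second port is matched the same way right after. Equivalent by-hand checks on such a host (illustrative, not part of the run):

    lspci -d 8086:159b                            # would list both E810 ports
    ls /sys/bus/pci/devices/0000:4b:00.0/net/     # -> cvl_0_0, the renamed netdev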
00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:15.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:15.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:15.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:15.491 11:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.491 11:27:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:15.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:26:15.491 00:26:15.491 --- 10.0.0.2 ping statistics --- 00:26:15.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.491 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:26:15.491 00:26:15.491 --- 10.0.0.1 ping statistics --- 00:26:15.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.491 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2863751 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2863751 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2863751 ']' 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.491 11:27:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.491 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.492 11:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.492 [2024-11-20 11:27:07.393312] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:26:15.492 [2024-11-20 11:27:07.393379] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.492 [2024-11-20 11:27:07.491666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:15.492 [2024-11-20 11:27:07.544024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.492 [2024-11-20 11:27:07.544080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.492 [2024-11-20 11:27:07.544089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.492 [2024-11-20 11:27:07.544096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.492 [2024-11-20 11:27:07.544102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.492 [2024-11-20 11:27:07.545779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.492 [2024-11-20 11:27:07.545784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.492 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.492 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:15.492 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.492 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:15.492 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:15.752 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.752 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2863751 00:26:15.752 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:15.752 [2024-11-20 11:27:08.417824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.752 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:16.013 Malloc0 00:26:16.013 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:16.274 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:16.535 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.535 [2024-11-20 11:27:09.177085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.535 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:16.795 [2024-11-20 11:27:09.345514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2864130 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2864130 /var/tmp/bdevperf.sock 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2864130 ']' 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:16.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
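The trace above builds the entire fixture for this test: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace to act as the target at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then configured over RPC with a TCP transport, a Malloc0 namespace, and listeners on ports 4420 and 4421. A condensed sketch of that sequence, assuming $SPDK points at the SPDK checkout (addresses, NQN, and interface names copied from the log; this is an illustration, not the exact nvmf/common.sh code):

# Network plumbing: isolate the target port in its own namespace so initiator
# and target can talk over real NICs on a single machine.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# Target configuration: transport, backing bdev, subsystem, two listeners.
RPC="$SPDK/scripts/rpc.py"    # talks to /var/tmp/spdk.sock by default
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -m 0x3 &
# (the harness waits for the RPC socket here before issuing commands)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second RPC server (/var/tmp/bdevperf.sock) belongs to the bdevperf process started just above with -z, which acts as the host side: it attaches to the subsystem through both listeners with -x multipath, giving Nvme0n1 two I/O paths to exercise.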
00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.795 11:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:17.736 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.736 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:17.736 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:17.736 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:18.307 Nvme0n1 00:26:18.307 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:18.568 Nvme0n1 00:26:18.568 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:18.568 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:21.113 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:21.113 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:21.113 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.113 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.054 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.315 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.315 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.315 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:22.315 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.580 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.580 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:22.580 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.580 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.841 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.103 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.103 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:23.103 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
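Every check_status assertion above is the same one-liner: query bdev_nvme_get_io_paths through bdevperf's RPC socket and use jq to pull a single boolean for the path with the given trsvcid. A simplified sketch of that helper, mirroring port_status in host/multipath_status.sh ($RPC as in the earlier sketch):

port_status() {
    # $1 = listener port, $2 = field (current/connected/accessible), $3 = expected
    local got
    got=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$got" == "$3" ]]
}

# "check_status true false true true true true" expands to:
port_status 4420 current true      # 4420 is the active (current) path
port_status 4421 current false
port_status 4420 connected true    # both paths remain connected...
port_status 4421 connected true
port_status 4420 accessible true   # ...and both are ANA-accessible
port_status 4421 accessible true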
00:26:23.364 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:23.364 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.749 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.010 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.010 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.010 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.010 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
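From here the test walks a matrix of ANA states. Each scenario flips the ANA state of the two listeners, sleeps one second so the initiator can observe the updated ANA log page, and re-runs the six assertions; under the default active_passive policy only the better-state path reports current==true. A sketch of one step (set_ANA_state mirrors the helper seen in the log; $RPC and port_status as above):

set_ANA_state() {   # $1 = state for the 4420 listener, $2 = state for 4421
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized
sleep 1                          # let the host pick up the ANA change
port_status 4420 current false   # optimized beats non_optimized...
port_status 4421 current true    # ...so 4421 becomes the current path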
00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.270 11:27:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:25.531 11:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.531 11:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:25.531 11:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:25.792 11:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:25.792 11:27:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:26:27.175 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.434 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.434 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.434 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.434 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.693 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.693 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:27.693 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.693 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:27.954 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:28.215 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:28.475 11:27:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:29.419 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:29.419 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.419 11:27:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.419 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.680 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.941 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.941 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.941 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.941 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.202 11:27:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.202 11:27:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.463 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.463 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:30.463 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:30.723 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:30.723 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.112 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.373 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.373 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.373 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.373 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.634 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.895 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:32.895 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:32.895 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:33.156 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.156 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:34.205 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:34.205 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:34.205 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.205 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.552 11:27:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.552 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.814 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.814 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.814 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.814 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.076 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.338 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.338 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:35.600 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:35.600 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:35.861 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:35.861 11:27:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:36.804 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:36.804 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.804 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.804 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.066 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.066 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.066 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.066 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.328 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.328 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.328 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.328 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.588 11:27:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.588 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.849 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.849 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.849 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:37.849 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.111 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.111 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:38.111 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.111 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.372 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:39.314 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:39.314 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:39.314 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.314 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.576 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.576 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:39.576 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.576 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.837 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.098 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.098 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.098 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.098 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.359 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.359 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.359 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.359 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:40.359 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.359 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:40.359 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:40.621 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:40.882 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
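The policy switch at multipath_status.sh@116 above changes what current means: after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, every connected path in the best available ANA class reports current==true, which is why the optimized/optimized and non_optimized/non_optimized scenarios show both listeners as current at once. A sketch of that step, reusing the helpers above:

# Switch Nvme0n1 from the default active_passive to active_active
# (-s targets bdevperf's RPC server, as in the log).
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

set_ANA_state non_optimized non_optimized
sleep 1
port_status 4420 current true   # both equal-state paths now carry I/O
port_status 4421 current true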
00:26:41.824 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:26:41.824 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:41.824 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:41.824 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.086 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:42.347 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.347 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:42.347 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.347 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:42.607 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.607 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:42.607 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.607 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:26:42.868 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:26:43.129 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:26:43.390 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:26:44.331 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:26:44.331 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:26:44.331 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.331 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:26:44.331 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:44.331 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:26:44.331 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.590 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:26:44.590 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:44.590 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:26:44.590 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.590 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:26:44.850 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
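The set_ANA_state step above is two target-side RPCs, one per listener, followed by a one-second pause so the host inside bdevperf can observe the change before the next round of checks; a sketch under the same assumptions (helper reconstructed from the sh@59-60 trace entries; subsystem NQN, address, and ports are exactly those in the log), after which the remaining port checks continue below:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
                -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# The transition logged above, followed by the script's settle delay:
set_ANA_state non_optimized inaccessible
sleep 1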
00:26:44.850 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:26:44.850 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:44.850 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:26:45.111 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2864130
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2864130 ']'
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2864130
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:45.372 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864130
00:26:45.372 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:26:45.372 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:26:45.372 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864130'
killing process with pid 2864130
00:26:45.372 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2864130
00:26:45.372 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2864130
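killprocess above (common/autotest_common.sh@954-978) guards the kill with a liveness check and refuses a plain kill on a process whose comm is sudo; here the target is bdevperf's reactor_2 thread, so the ordinary path is taken. A rough sketch of the logic as it can be read back from the trace (the in-tree helper may differ, e.g. in how it handles sudo-owned processes); the JSON block that follows is bdevperf's final result dump, emitted as it shuts down:

killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                       # @954: reject an empty pid
        kill -0 "$pid" 2>/dev/null || return 0          # @958: already gone? nothing to do
        if [ "$(uname)" = Linux ]; then                 # @959
                process_name=$(ps --no-headers -o comm= "$pid")   # @960 -> reactor_2 here
        fi
        # @964 compares against "sudo"; the branch for sudo-owned processes is
        # elided here because this trace never takes it.
        if [ "$process_name" != sudo ]; then
                echo "killing process with pid $pid"    # @972
                kill "$pid"                             # @973
        fi
        wait "$pid"                                     # @978: reap and surface exit status
}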
00:26:45.372 "core_mask": "0x4", 00:26:45.372 "workload": "verify", 00:26:45.372 "status": "terminated", 00:26:45.372 "verify_range": { 00:26:45.372 "start": 0, 00:26:45.372 "length": 16384 00:26:45.372 }, 00:26:45.372 "queue_depth": 128, 00:26:45.372 "io_size": 4096, 00:26:45.372 "runtime": 26.653567, 00:26:45.372 "iops": 12195.891079043942, 00:26:45.372 "mibps": 47.6401995275154, 00:26:45.372 "io_failed": 0, 00:26:45.372 "io_timeout": 0, 00:26:45.372 "avg_latency_us": 10475.865266819252, 00:26:45.372 "min_latency_us": 737.28, 00:26:45.372 "max_latency_us": 3019898.88 00:26:45.372 } 00:26:45.372 ], 00:26:45.372 "core_count": 1 00:26:45.372 } 00:26:45.636 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2864130 00:26:45.636 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.636 [2024-11-20 11:27:09.400983] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:26:45.636 [2024-11-20 11:27:09.401063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2864130 ] 00:26:45.636 [2024-11-20 11:27:09.482565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.636 [2024-11-20 11:27:09.534148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.636 Running I/O for 90 seconds... 00:26:45.636 10955.00 IOPS, 42.79 MiB/s [2024-11-20T10:27:38.378Z] 11328.00 IOPS, 44.25 MiB/s [2024-11-20T10:27:38.378Z] 11892.00 IOPS, 46.45 MiB/s [2024-11-20T10:27:38.378Z] 12177.00 IOPS, 47.57 MiB/s [2024-11-20T10:27:38.378Z] 12315.40 IOPS, 48.11 MiB/s [2024-11-20T10:27:38.378Z] 12397.83 IOPS, 48.43 MiB/s [2024-11-20T10:27:38.378Z] 12465.71 IOPS, 48.69 MiB/s [2024-11-20T10:27:38.378Z] 12538.62 IOPS, 48.98 MiB/s [2024-11-20T10:27:38.378Z] 12593.89 IOPS, 49.19 MiB/s [2024-11-20T10:27:38.378Z] 12646.20 IOPS, 49.40 MiB/s [2024-11-20T10:27:38.378Z] 12685.09 IOPS, 49.55 MiB/s [2024-11-20T10:27:38.378Z] [2024-11-20 11:27:23.256022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:45.636 [2024-11-20 11:27:23.256278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.636 [2024-11-20 11:27:23.256283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:45.637 [2024-11-20 11:27:23.256303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:30920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:30960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.637 [2024-11-20 11:27:23.256484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.637 [2024-11-20 11:27:23.256621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:31072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:45.637 [2024-11-20 11:27:23.256930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.256987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.256998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.257004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.257015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.257020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.257031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:31152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.257036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.257048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:31160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.637 [2024-11-20 11:27:23.257053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:45.637 [2024-11-20 11:27:23.257065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:31168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:31224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:31232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:31248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:31264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:31288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:31304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:31336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:45.638 [2024-11-20 11:27:23.257522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:31392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:31408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:31416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:31424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:45.638 [2024-11-20 11:27:23.257889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.638 [2024-11-20 11:27:23.257894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.257908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.257913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.257928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.257933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.257947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.257952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.257966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.257971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.257985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:31512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.257991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:26:45.639 [2024-11-20 11:27:23.258182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.639 [2024-11-20 11:27:23.258608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:45.639 [2024-11-20 11:27:23.258725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.639 [2024-11-20 11:27:23.258731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:45.639 12607.17 IOPS, 49.25 MiB/s [2024-11-20T10:27:38.381Z] 11637.38 IOPS, 45.46 MiB/s [2024-11-20T10:27:38.381Z] 10806.14 IOPS, 42.21 MiB/s [2024-11-20T10:27:38.381Z] 10174.47 IOPS, 39.74 MiB/s [2024-11-20T10:27:38.382Z] 10361.88 IOPS, 40.48 MiB/s [2024-11-20T10:27:38.382Z] 10504.76 IOPS, 41.03 MiB/s [2024-11-20T10:27:38.382Z] 10920.50 IOPS, 42.66 MiB/s [2024-11-20T10:27:38.382Z] 11249.84 IOPS, 43.94 MiB/s [2024-11-20T10:27:38.382Z] 11427.10 IOPS, 44.64 MiB/s [2024-11-20T10:27:38.382Z] 11499.71 IOPS, 44.92 MiB/s [2024-11-20T10:27:38.382Z] 11564.05 IOPS, 45.17 MiB/s [2024-11-20T10:27:38.382Z] 11813.91 IOPS, 46.15 MiB/s [2024-11-20T10:27:38.382Z] 12032.92 IOPS, 47.00 MiB/s [2024-11-20T10:27:38.382Z] [2024-11-20 11:27:35.864777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.864809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 
11:27:35.864846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.864865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.864886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.864901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.864917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.864932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.864943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.864948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.865571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.865578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.865589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.640 [2024-11-20 11:27:35.865595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15152 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.867853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.867858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.868393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.868402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.868413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.868419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.868430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:45.640 [2024-11-20 11:27:35.868435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:45.640 [2024-11-20 11:27:35.868446] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.640 [2024-11-20 11:27:35.868451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:45.640 [2024-11-20 11:27:35.868462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.640 [2024-11-20 11:27:35.868467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:45.640 [2024-11-20 11:27:35.868478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.640 [2024-11-20 11:27:35.868483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:45.640 [2024-11-20 11:27:35.868493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.640 [2024-11-20 11:27:35.868498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:45.640 [2024-11-20 11:27:35.868509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:45.640 [2024-11-20 11:27:35.868515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:45.640 12144.52 IOPS, 47.44 MiB/s
[2024-11-20T10:27:38.382Z] 12178.69 IOPS, 47.57 MiB/s
[2024-11-20T10:27:38.382Z] Received shutdown signal, test time was about 26.654177 seconds
00:26:45.640
00:26:45.640                                                                                                  Latency(us)
00:26:45.640 [2024-11-20T10:27:38.382Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:45.640 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:45.640 Verification LBA range: start 0x0 length 0x4000
00:26:45.640 Nvme0n1                                :      26.65   12195.89      47.64       0.00       0.00   10475.87     737.28 3019898.88
00:26:45.640 [2024-11-20T10:27:38.382Z] ===================================================================================================================
00:26:45.640 [2024-11-20T10:27:38.382Z] Total                                  :            12195.89      47.64       0.00       0.00   10475.87     737.28 3019898.88
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:45.640 11:27:38
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:45.640 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:45.640 rmmod nvme_tcp 00:26:45.640 rmmod nvme_fabrics 00:26:45.905 rmmod nvme_keyring 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2863751 ']' 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2863751 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2863751 ']' 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2863751 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2863751 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2863751' 00:26:45.905 killing process with pid 2863751 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2863751 00:26:45.905 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2863751 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:45.906 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.454 00:26:48.454 real 0m41.106s 00:26:48.454 user 1m46.085s 00:26:48.454 sys 0m11.554s 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.454 ************************************ 00:26:48.454 END TEST nvmf_host_multipath_status 00:26:48.454 ************************************ 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.454 ************************************ 00:26:48.454 START TEST nvmf_discovery_remove_ifc 00:26:48.454 ************************************ 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:48.454 * Looking for test storage... 00:26:48.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case 
"$op" in 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:48.454 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.455 --rc genhtml_branch_coverage=1 00:26:48.455 --rc genhtml_function_coverage=1 00:26:48.455 --rc genhtml_legend=1 00:26:48.455 --rc geninfo_all_blocks=1 00:26:48.455 --rc geninfo_unexecuted_blocks=1 00:26:48.455 00:26:48.455 ' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.455 --rc genhtml_branch_coverage=1 00:26:48.455 --rc genhtml_function_coverage=1 00:26:48.455 --rc genhtml_legend=1 00:26:48.455 --rc geninfo_all_blocks=1 00:26:48.455 --rc geninfo_unexecuted_blocks=1 00:26:48.455 00:26:48.455 ' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.455 --rc genhtml_branch_coverage=1 00:26:48.455 --rc genhtml_function_coverage=1 00:26:48.455 --rc genhtml_legend=1 00:26:48.455 --rc geninfo_all_blocks=1 00:26:48.455 --rc geninfo_unexecuted_blocks=1 00:26:48.455 00:26:48.455 ' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:48.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.455 --rc genhtml_branch_coverage=1 00:26:48.455 --rc genhtml_function_coverage=1 
00:26:48.455 --rc genhtml_legend=1 00:26:48.455 --rc geninfo_all_blocks=1 00:26:48.455 --rc geninfo_unexecuted_blocks=1 00:26:48.455 00:26:48.455 ' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
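The '[' '' -eq 1 ']' test traced above is what trips the "integer expression expected" error reported on the next line: common.sh line 33 hands an empty expansion to an arithmetic test. A minimal sketch of the usual guard, assuming a hypothetical SPDK_TEST_FLAG as a stand-in, since the trace does not show which variable common.sh actually reads:

    # Hedged sketch: default an unset/empty variable before an integer test.
    # SPDK_TEST_FLAG is a hypothetical name, not the variable from common.sh.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
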
00:26:48.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:48.455 11:27:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.455 11:27:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.602 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.602 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.602 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.602 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.602 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.602 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.603 11:27:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:56.603 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:56.603 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:56.603 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
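The pci_net_devs glob traced above is how each matching e810 function gets resolved to its kernel interface; a minimal sketch of that sysfs lookup, using the first port the log just reported:

    # Sketch of the lookup behind "Found net devices under 0000:4b:00.0: cvl_0_0".
    pci=0000:4b:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done
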
00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:56.603 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.603 11:27:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.603 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:26:56.604 00:26:56.604 --- 10.0.0.2 ping statistics --- 00:26:56.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.604 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:26:56.604 00:26:56.604 --- 10.0.0.1 ping statistics --- 00:26:56.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.604 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2874639 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2874639 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
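Condensed from the nvmf_tcp_init trace above: one e810 port stays in the root namespace as the initiator (10.0.0.1) while the other moves into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2), with an iptables exception and ping checks before nvmf_tgt starts inside the namespace. A sketch of the same sequence, with the workspace path shortened:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # target reachable?
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
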
00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2874639 ']' 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.604 11:27:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.604 [2024-11-20 11:27:48.550720] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:26:56.604 [2024-11-20 11:27:48.550790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.604 [2024-11-20 11:27:48.649104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.604 [2024-11-20 11:27:48.699112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.604 [2024-11-20 11:27:48.699172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.604 [2024-11-20 11:27:48.699180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.604 [2024-11-20 11:27:48.699188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.604 [2024-11-20 11:27:48.699194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
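The app_setup_trace notices above describe how to inspect the 0xFFFF tracepoint mask while the target runs; following them verbatim:

    ./build/bin/spdk_trace -s nvmf -i 0    # live snapshot, per the notice above
    cp /dev/shm/nvmf_trace.0 /tmp/         # keep a copy for offline analysis
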
00:26:56.604 [2024-11-20 11:27:48.699944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.871 [2024-11-20 11:27:49.425783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.871 [2024-11-20 11:27:49.434022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:56.871 null0 00:26:56.871 [2024-11-20 11:27:49.465977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.871 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2874816 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2874816 /tmp/host.sock 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2874816 ']' 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:56.872 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.872 11:27:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.872 [2024-11-20 11:27:49.554662] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
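The rpc_cmd block traced above (discovery_remove_ifc.sh@43) is what produced the two nvmf_tcp_listen notices: it stands up the TCP transport, a discovery listener on 8009, a null0 bdev, and a data listener on 4420. A hedged sketch of equivalent rpc.py calls; details the log does not show (subsystem flags, null bdev size) are assumptions:

    rpc=scripts/rpc.py    # default target socket /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
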
00:26:56.872 [2024-11-20 11:27:49.554757] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874816 ] 00:26:57.133 [2024-11-20 11:27:49.648822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.133 [2024-11-20 11:27:49.701126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.706 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.966 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.967 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:57.967 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.967 11:27:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.910 [2024-11-20 11:27:51.474665] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.910 [2024-11-20 11:27:51.474705] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.910 [2024-11-20 11:27:51.474720] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.910 [2024-11-20 11:27:51.605138] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:59.171 [2024-11-20 11:27:51.704485] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:59.172 [2024-11-20 11:27:51.705779] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15ce3f0:1 started. 
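With --wait-for-attach, bdev_nvme_start_discovery returns once the discovered namespace has been attached, and the wait_for_bdev/get_bdev_list helpers traced below then poll the host app until the bdev list matches. A sketch of that loop, condensed from the rpc_cmd/jq/sort/xargs pipeline in the trace:

    # Poll /tmp/host.sock until the discovered namespace shows up as nvme0n1.
    until [ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
               | jq -r '.[].name' | sort | xargs)" = nvme0n1 ]; do
        sleep 1
    done
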
00:26:59.172 [2024-11-20 11:27:51.707649] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:59.172 [2024-11-20 11:27:51.707717] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:59.172 [2024-11-20 11:27:51.707745] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:59.172 [2024-11-20 11:27:51.707764] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:59.172 [2024-11-20 11:27:51.707796] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.172 [2024-11-20 11:27:51.713733] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15ce3f0 was disconnected and freed. delete nvme_qpair. 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.172 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.433 11:27:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.433 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.433 11:27:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:00.375 11:27:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.319 11:27:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.319 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.319 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.319 11:27:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.705 11:27:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.705 11:27:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.650 11:27:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.594 [2024-11-20 11:27:57.147667] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:04.594 [2024-11-20 11:27:57.147704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.594 [2024-11-20 11:27:57.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.594 [2024-11-20 11:27:57.147719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.594 [2024-11-20 11:27:57.147725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.594 [2024-11-20 11:27:57.147731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.594 [2024-11-20 11:27:57.147736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.594 [2024-11-20 11:27:57.147741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.594 [2024-11-20 11:27:57.147746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.594 [2024-11-20 11:27:57.147756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:04.594 [2024-11-20 11:27:57.147761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.594 [2024-11-20 11:27:57.147766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aac00 is same with the state(6) to be set 00:27:04.594 [2024-11-20 11:27:57.157688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aac00 (9): Bad file descriptor 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.594 11:27:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.594 [2024-11-20 11:27:57.167721] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:04.594 [2024-11-20 11:27:57.167732] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:04.594 [2024-11-20 11:27:57.167735] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:04.594 [2024-11-20 11:27:57.167739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:04.594 [2024-11-20 11:27:57.167755] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:05.535 [2024-11-20 11:27:58.171255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:05.535 [2024-11-20 11:27:58.171346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15aac00 with addr=10.0.0.2, port=4420 00:27:05.535 [2024-11-20 11:27:58.171377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aac00 is same with the state(6) to be set 00:27:05.535 [2024-11-20 11:27:58.171433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aac00 (9): Bad file descriptor 00:27:05.535 [2024-11-20 11:27:58.171574] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:05.535 [2024-11-20 11:27:58.171632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:05.535 [2024-11-20 11:27:58.171654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:05.535 [2024-11-20 11:27:58.171678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:05.535 [2024-11-20 11:27:58.171698] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
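[annotation] The get_bdev_list / sleep 1 cycles running through the trace above are a plain poll-until-state loop. A minimal sketch of that pattern, assuming SPDK's rpc_cmd wrapper and the host app listening on /tmp/host.sock (helper names mirror host/discovery_remove_ifc.sh; the iteration cap here is illustrative, the real test relies on an outer timeout):

    # Current bdev names as one sorted, space-separated line.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list equals the expected value.
    wait_for_bdev() {
        local expected=$1 i
        for ((i = 0; i < 30; i++)); do   # illustrative cap, not the script's exact bound
            [[ "$(get_bdev_list)" == "$expected" ]] && return 0
            sleep 1
        done
        return 1
    }

Calling wait_for_bdev '' then means "wait until no bdevs are left", which is what the loop above keeps spinning on after the interface teardown.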
00:27:05.535 [2024-11-20 11:27:58.171714] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:05.535 [2024-11-20 11:27:58.171728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:05.535 [2024-11-20 11:27:58.171750] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:05.535 [2024-11-20 11:27:58.171765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:05.535 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.535 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:05.535 11:27:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.477 [2024-11-20 11:27:59.174171] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:06.477 [2024-11-20 11:27:59.174186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:06.477 [2024-11-20 11:27:59.174194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:06.477 [2024-11-20 11:27:59.174199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:06.477 [2024-11-20 11:27:59.174204] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:06.477 [2024-11-20 11:27:59.174209] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:06.477 [2024-11-20 11:27:59.174213] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:06.477 [2024-11-20 11:27:59.174216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
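[annotation] The connect() errno = 110 (ETIMEDOUT) storm above is the intended fault: earlier in the test the target address was removed and the link dropped inside the target namespace, so every reconnect attempt times out until the interface comes back. In sketch form, the injection is just the two commands traced at steps @75/@76:

    # Take the target offline: drop its address and down the link in its netns.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

bdev_nvme then cycles through disconnect / reconnect / "Resetting controller failed" exactly as logged, once per reconnect interval.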
00:27:06.477 [2024-11-20 11:27:59.174235] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:06.477 [2024-11-20 11:27:59.174251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.477 [2024-11-20 11:27:59.174258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.477 [2024-11-20 11:27:59.174265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.477 [2024-11-20 11:27:59.174270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.477 [2024-11-20 11:27:59.174276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.477 [2024-11-20 11:27:59.174281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.477 [2024-11-20 11:27:59.174286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.477 [2024-11-20 11:27:59.174291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.477 [2024-11-20 11:27:59.174297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.477 [2024-11-20 11:27:59.174302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.477 [2024-11-20 11:27:59.174308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:06.477 [2024-11-20 11:27:59.174545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159a340 (9): Bad file descriptor 00:27:06.477 [2024-11-20 11:27:59.175555] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:06.477 [2024-11-20 11:27:59.175562] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.477 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.737 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.738 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.738 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.738 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.738 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:06.738 11:27:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.678 11:28:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.678 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.938 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:07.938 11:28:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.510 [2024-11-20 11:28:01.229105] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:08.510 [2024-11-20 11:28:01.229120] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:08.510 [2024-11-20 11:28:01.229130] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:08.770 [2024-11-20 11:28:01.357507] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:08.770 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.770 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.770 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.770 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.770 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.771 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.771 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.771 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.771 [2024-11-20 11:28:01.459300] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:08.771 [2024-11-20 11:28:01.460000] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x159f130:1 started. 
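[annotation] The re-attach above (controller instance 2, qpair 0x159f130) is the recovery half of the test: once the address and link are restored, the discovery poller finds the subsystem again and rebuilds the namespace as nvme1n1. The traced commands at @82/@83/@86, condensed:

    # Bring the target back and wait for discovery to re-create the bdev.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # new controller => new namespace name, nvme1n1 not nvme0n1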
00:27:08.771 [2024-11-20 11:28:01.460902] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:08.771 [2024-11-20 11:28:01.460929] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:08.771 [2024-11-20 11:28:01.460945] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:08.771 [2024-11-20 11:28:01.460957] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:08.771 [2024-11-20 11:28:01.460963] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:08.771 [2024-11-20 11:28:01.467589] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x159f130 was disconnected and freed. delete nvme_qpair. 00:27:08.771 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:08.771 11:28:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2874816 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2874816 ']' 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2874816 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874816 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874816' 00:27:10.154 killing process with pid 2874816 
00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2874816 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2874816 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:10.154 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:10.155 rmmod nvme_tcp 00:27:10.155 rmmod nvme_fabrics 00:27:10.155 rmmod nvme_keyring 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2874639 ']' 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2874639 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2874639 ']' 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2874639 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874639 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874639' 00:27:10.155 killing process with pid 2874639 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2874639 00:27:10.155 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2874639 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:10.417 11:28:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.417 11:28:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:12.353 00:27:12.353 real 0m24.275s 00:27:12.353 user 0m29.232s 00:27:12.353 sys 0m7.146s 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.353 ************************************ 00:27:12.353 END TEST nvmf_discovery_remove_ifc 00:27:12.353 ************************************ 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.353 11:28:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.614 ************************************ 00:27:12.614 START TEST nvmf_identify_kernel_target 00:27:12.614 ************************************ 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:12.614 * Looking for test storage... 
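[annotation] The iptables-save | grep -v SPDK_NVMF | iptables-restore triple above is how the harness removes only its own firewall rules: every rule it installs carries an 'SPDK_NVMF:<original args>' comment (see the tagged ACCEPT insert later in this log), so teardown can filter on that marker. As a sketch (iptr matches the helper name in nvmf/common.sh):

    # Reload the ruleset minus every rule tagged with the SPDK_NVMF comment,
    # leaving all unrelated firewall state untouched.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }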
00:27:12.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.614 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:12.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.615 --rc genhtml_branch_coverage=1 00:27:12.615 --rc genhtml_function_coverage=1 00:27:12.615 --rc genhtml_legend=1 00:27:12.615 --rc geninfo_all_blocks=1 00:27:12.615 --rc geninfo_unexecuted_blocks=1 00:27:12.615 00:27:12.615 ' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:12.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.615 --rc genhtml_branch_coverage=1 00:27:12.615 --rc genhtml_function_coverage=1 00:27:12.615 --rc genhtml_legend=1 00:27:12.615 --rc geninfo_all_blocks=1 00:27:12.615 --rc geninfo_unexecuted_blocks=1 00:27:12.615 00:27:12.615 ' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:12.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.615 --rc genhtml_branch_coverage=1 00:27:12.615 --rc genhtml_function_coverage=1 00:27:12.615 --rc genhtml_legend=1 00:27:12.615 --rc geninfo_all_blocks=1 00:27:12.615 --rc geninfo_unexecuted_blocks=1 00:27:12.615 00:27:12.615 ' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:12.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.615 --rc genhtml_branch_coverage=1 00:27:12.615 --rc genhtml_function_coverage=1 00:27:12.615 --rc genhtml_legend=1 00:27:12.615 --rc geninfo_all_blocks=1 00:27:12.615 --rc geninfo_unexecuted_blocks=1 00:27:12.615 00:27:12.615 ' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:12.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:12.615 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:12.875 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:12.875 11:28:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.021 11:28:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:21.021 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:21.021 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.021 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:21.022 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:21.022 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:27:21.022 00:27:21.022 --- 10.0.0.2 ping statistics --- 00:27:21.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.022 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:27:21.022 00:27:21.022 --- 10.0.0.1 ping statistics --- 00:27:21.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.022 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.022 11:28:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:21.022 11:28:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:23.570 Waiting for block devices as requested 00:27:23.832 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:23.832 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:23.832 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.093 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:24.093 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:24.093 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:24.354 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.354 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:24.354 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:24.614 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.614 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.874 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.874 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:24.874 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:25.134 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:25.134 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:25.134 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
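The trace that follows drives the in-kernel nvmet target entirely through configfs. Because xtrace hides the redirection targets of the bare `echo` commands, the attribute file names in this sketch are the standard kernel nvmet configfs ones, an assumption rather than something read from this log; a minimal standalone sketch of the same sequence:

    #!/usr/bin/env bash
    # Minimal sketch (not the test script itself): export a local NVMe
    # namespace over TCP through the kernel nvmet target, mirroring the
    # configure_kernel_target steps traced below. Assumes root and an
    # otherwise unused /dev/nvme0n1.
    set -e
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/$nqn

    modprobe nvmet nvmet_tcp
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo "SPDK-$nqn"  > "$subsys/attr_model"               # the bare `echo SPDK-nqn...` below
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"           # publish the subsystem on the port

Once the symlink lands, the target answers on 10.0.0.1:4420, which is exactly what the `nvme discover` in the trace below verifies.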
00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:25.396 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:25.657 No valid GPT data, bailing
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:25.657
00:27:25.657 Discovery Log Number of Records 2, Generation counter 2
00:27:25.657 =====Discovery Log Entry 0======
00:27:25.657 trtype: tcp
00:27:25.657 adrfam: ipv4
00:27:25.657 subtype: current discovery subsystem
00:27:25.657 treq: not specified, sq flow control disable
supported 00:27:25.657 portid: 1 00:27:25.657 trsvcid: 4420 00:27:25.657 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:25.657 traddr: 10.0.0.1 00:27:25.657 eflags: none 00:27:25.657 sectype: none 00:27:25.657 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:25.657 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:25.920 ===================================================== 00:27:25.920 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:25.920 ===================================================== 00:27:25.920 Controller Capabilities/Features 00:27:25.920 ================================ 00:27:25.920 Vendor ID: 0000 00:27:25.920 Subsystem Vendor ID: 0000 00:27:25.920 Serial Number: 8a8097d77fd0f510eaa5 00:27:25.920 Model Number: Linux 00:27:25.920 Firmware Version: 6.8.9-20 00:27:25.920 Recommended Arb Burst: 0 00:27:25.920 IEEE OUI Identifier: 00 00 00 00:27:25.920 Multi-path I/O 00:27:25.920 May have multiple subsystem ports: No 00:27:25.920 May have multiple controllers: No 00:27:25.920 Associated with SR-IOV VF: No 00:27:25.920 Max Data Transfer Size: Unlimited 00:27:25.920 Max Number of Namespaces: 0 00:27:25.920 Max Number of I/O Queues: 1024 00:27:25.920 NVMe Specification Version (VS): 1.3 00:27:25.920 NVMe Specification Version (Identify): 1.3 00:27:25.920 Maximum Queue Entries: 1024 00:27:25.920 Contiguous Queues Required: No 00:27:25.920 Arbitration Mechanisms Supported 00:27:25.920 Weighted Round Robin: Not Supported 00:27:25.920 Vendor Specific: Not Supported 00:27:25.920 Reset Timeout: 7500 ms 00:27:25.920 Doorbell Stride: 4 bytes 00:27:25.920 NVM Subsystem Reset: Not Supported 00:27:25.920 Command Sets Supported 00:27:25.920 NVM Command Set: Supported 00:27:25.920 Boot Partition: Not Supported 00:27:25.920 Memory Page Size Minimum: 4096 bytes 00:27:25.920 Memory Page Size Maximum: 4096 bytes 00:27:25.920 Persistent Memory Region: Not Supported 00:27:25.920 Optional Asynchronous Events Supported 00:27:25.920 Namespace Attribute Notices: Not Supported 00:27:25.920 Firmware Activation Notices: Not Supported 00:27:25.920 ANA Change Notices: Not Supported 00:27:25.920 PLE Aggregate Log Change Notices: Not Supported 00:27:25.920 LBA Status Info Alert Notices: Not Supported 00:27:25.920 EGE Aggregate Log Change Notices: Not Supported 00:27:25.920 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.920 Zone Descriptor Change Notices: Not Supported 00:27:25.920 Discovery Log Change Notices: Supported 00:27:25.920 Controller Attributes 00:27:25.920 128-bit Host Identifier: Not Supported 00:27:25.920 Non-Operational Permissive Mode: Not Supported 00:27:25.920 NVM Sets: Not Supported 00:27:25.920 Read Recovery Levels: Not Supported 00:27:25.920 Endurance Groups: Not Supported 00:27:25.920 Predictable Latency Mode: Not Supported 00:27:25.920 Traffic Based Keep ALive: Not Supported 00:27:25.920 Namespace Granularity: Not Supported 00:27:25.920 SQ Associations: Not Supported 00:27:25.920 UUID List: Not Supported 00:27:25.920 Multi-Domain Subsystem: Not Supported 00:27:25.920 Fixed Capacity Management: Not Supported 00:27:25.920 Variable Capacity Management: Not Supported 00:27:25.920 Delete Endurance Group: Not Supported 00:27:25.920 Delete NVM Set: Not Supported 00:27:25.920 Extended LBA Formats Supported: Not Supported 00:27:25.920 Flexible Data Placement 
Supported: Not Supported 00:27:25.920 00:27:25.920 Controller Memory Buffer Support 00:27:25.920 ================================ 00:27:25.920 Supported: No 00:27:25.920 00:27:25.920 Persistent Memory Region Support 00:27:25.920 ================================ 00:27:25.920 Supported: No 00:27:25.920 00:27:25.920 Admin Command Set Attributes 00:27:25.920 ============================ 00:27:25.920 Security Send/Receive: Not Supported 00:27:25.920 Format NVM: Not Supported 00:27:25.920 Firmware Activate/Download: Not Supported 00:27:25.920 Namespace Management: Not Supported 00:27:25.920 Device Self-Test: Not Supported 00:27:25.920 Directives: Not Supported 00:27:25.920 NVMe-MI: Not Supported 00:27:25.920 Virtualization Management: Not Supported 00:27:25.920 Doorbell Buffer Config: Not Supported 00:27:25.920 Get LBA Status Capability: Not Supported 00:27:25.920 Command & Feature Lockdown Capability: Not Supported 00:27:25.920 Abort Command Limit: 1 00:27:25.920 Async Event Request Limit: 1 00:27:25.920 Number of Firmware Slots: N/A 00:27:25.920 Firmware Slot 1 Read-Only: N/A 00:27:25.920 Firmware Activation Without Reset: N/A 00:27:25.920 Multiple Update Detection Support: N/A 00:27:25.920 Firmware Update Granularity: No Information Provided 00:27:25.920 Per-Namespace SMART Log: No 00:27:25.920 Asymmetric Namespace Access Log Page: Not Supported 00:27:25.920 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:25.920 Command Effects Log Page: Not Supported 00:27:25.920 Get Log Page Extended Data: Supported 00:27:25.920 Telemetry Log Pages: Not Supported 00:27:25.920 Persistent Event Log Pages: Not Supported 00:27:25.920 Supported Log Pages Log Page: May Support 00:27:25.920 Commands Supported & Effects Log Page: Not Supported 00:27:25.920 Feature Identifiers & Effects Log Page:May Support 00:27:25.920 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.920 Data Area 4 for Telemetry Log: Not Supported 00:27:25.920 Error Log Page Entries Supported: 1 00:27:25.920 Keep Alive: Not Supported 00:27:25.920 00:27:25.920 NVM Command Set Attributes 00:27:25.920 ========================== 00:27:25.920 Submission Queue Entry Size 00:27:25.920 Max: 1 00:27:25.920 Min: 1 00:27:25.920 Completion Queue Entry Size 00:27:25.920 Max: 1 00:27:25.920 Min: 1 00:27:25.920 Number of Namespaces: 0 00:27:25.920 Compare Command: Not Supported 00:27:25.920 Write Uncorrectable Command: Not Supported 00:27:25.920 Dataset Management Command: Not Supported 00:27:25.920 Write Zeroes Command: Not Supported 00:27:25.920 Set Features Save Field: Not Supported 00:27:25.920 Reservations: Not Supported 00:27:25.920 Timestamp: Not Supported 00:27:25.920 Copy: Not Supported 00:27:25.920 Volatile Write Cache: Not Present 00:27:25.920 Atomic Write Unit (Normal): 1 00:27:25.920 Atomic Write Unit (PFail): 1 00:27:25.920 Atomic Compare & Write Unit: 1 00:27:25.920 Fused Compare & Write: Not Supported 00:27:25.920 Scatter-Gather List 00:27:25.920 SGL Command Set: Supported 00:27:25.920 SGL Keyed: Not Supported 00:27:25.920 SGL Bit Bucket Descriptor: Not Supported 00:27:25.920 SGL Metadata Pointer: Not Supported 00:27:25.920 Oversized SGL: Not Supported 00:27:25.920 SGL Metadata Address: Not Supported 00:27:25.920 SGL Offset: Supported 00:27:25.920 Transport SGL Data Block: Not Supported 00:27:25.920 Replay Protected Memory Block: Not Supported 00:27:25.920 00:27:25.920 Firmware Slot Information 00:27:25.920 ========================= 00:27:25.920 Active slot: 0 00:27:25.920 00:27:25.920 00:27:25.920 Error Log 00:27:25.920 
========= 00:27:25.920 00:27:25.920 Active Namespaces 00:27:25.920 ================= 00:27:25.920 Discovery Log Page 00:27:25.920 ================== 00:27:25.920 Generation Counter: 2 00:27:25.920 Number of Records: 2 00:27:25.920 Record Format: 0 00:27:25.920 00:27:25.920 Discovery Log Entry 0 00:27:25.920 ---------------------- 00:27:25.920 Transport Type: 3 (TCP) 00:27:25.920 Address Family: 1 (IPv4) 00:27:25.920 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:25.920 Entry Flags: 00:27:25.920 Duplicate Returned Information: 0 00:27:25.920 Explicit Persistent Connection Support for Discovery: 0 00:27:25.920 Transport Requirements: 00:27:25.920 Secure Channel: Not Specified 00:27:25.920 Port ID: 1 (0x0001) 00:27:25.920 Controller ID: 65535 (0xffff) 00:27:25.920 Admin Max SQ Size: 32 00:27:25.920 Transport Service Identifier: 4420 00:27:25.920 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:25.920 Transport Address: 10.0.0.1 00:27:25.920 Discovery Log Entry 1 00:27:25.920 ---------------------- 00:27:25.920 Transport Type: 3 (TCP) 00:27:25.920 Address Family: 1 (IPv4) 00:27:25.920 Subsystem Type: 2 (NVM Subsystem) 00:27:25.920 Entry Flags: 00:27:25.920 Duplicate Returned Information: 0 00:27:25.920 Explicit Persistent Connection Support for Discovery: 0 00:27:25.920 Transport Requirements: 00:27:25.920 Secure Channel: Not Specified 00:27:25.920 Port ID: 1 (0x0001) 00:27:25.920 Controller ID: 65535 (0xffff) 00:27:25.920 Admin Max SQ Size: 32 00:27:25.920 Transport Service Identifier: 4420 00:27:25.920 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:25.920 Transport Address: 10.0.0.1 00:27:25.920 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.921 get_feature(0x01) failed 00:27:25.921 get_feature(0x02) failed 00:27:25.921 get_feature(0x04) failed 00:27:25.921 ===================================================== 00:27:25.921 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:25.921 ===================================================== 00:27:25.921 Controller Capabilities/Features 00:27:25.921 ================================ 00:27:25.921 Vendor ID: 0000 00:27:25.921 Subsystem Vendor ID: 0000 00:27:25.921 Serial Number: a6947472028f76bacaec 00:27:25.921 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.921 Firmware Version: 6.8.9-20 00:27:25.921 Recommended Arb Burst: 6 00:27:25.921 IEEE OUI Identifier: 00 00 00 00:27:25.921 Multi-path I/O 00:27:25.921 May have multiple subsystem ports: Yes 00:27:25.921 May have multiple controllers: Yes 00:27:25.921 Associated with SR-IOV VF: No 00:27:25.921 Max Data Transfer Size: Unlimited 00:27:25.921 Max Number of Namespaces: 1024 00:27:25.921 Max Number of I/O Queues: 128 00:27:25.921 NVMe Specification Version (VS): 1.3 00:27:25.921 NVMe Specification Version (Identify): 1.3 00:27:25.921 Maximum Queue Entries: 1024 00:27:25.921 Contiguous Queues Required: No 00:27:25.921 Arbitration Mechanisms Supported 00:27:25.921 Weighted Round Robin: Not Supported 00:27:25.921 Vendor Specific: Not Supported 00:27:25.921 Reset Timeout: 7500 ms 00:27:25.921 Doorbell Stride: 4 bytes 00:27:25.921 NVM Subsystem Reset: Not Supported 00:27:25.921 Command Sets Supported 00:27:25.921 NVM Command Set: Supported 00:27:25.921 Boot Partition: Not Supported 00:27:25.921 
Memory Page Size Minimum: 4096 bytes 00:27:25.921 Memory Page Size Maximum: 4096 bytes 00:27:25.921 Persistent Memory Region: Not Supported 00:27:25.921 Optional Asynchronous Events Supported 00:27:25.921 Namespace Attribute Notices: Supported 00:27:25.921 Firmware Activation Notices: Not Supported 00:27:25.921 ANA Change Notices: Supported 00:27:25.921 PLE Aggregate Log Change Notices: Not Supported 00:27:25.921 LBA Status Info Alert Notices: Not Supported 00:27:25.921 EGE Aggregate Log Change Notices: Not Supported 00:27:25.921 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.921 Zone Descriptor Change Notices: Not Supported 00:27:25.921 Discovery Log Change Notices: Not Supported 00:27:25.921 Controller Attributes 00:27:25.921 128-bit Host Identifier: Supported 00:27:25.921 Non-Operational Permissive Mode: Not Supported 00:27:25.921 NVM Sets: Not Supported 00:27:25.921 Read Recovery Levels: Not Supported 00:27:25.921 Endurance Groups: Not Supported 00:27:25.921 Predictable Latency Mode: Not Supported 00:27:25.921 Traffic Based Keep ALive: Supported 00:27:25.921 Namespace Granularity: Not Supported 00:27:25.921 SQ Associations: Not Supported 00:27:25.921 UUID List: Not Supported 00:27:25.921 Multi-Domain Subsystem: Not Supported 00:27:25.921 Fixed Capacity Management: Not Supported 00:27:25.921 Variable Capacity Management: Not Supported 00:27:25.921 Delete Endurance Group: Not Supported 00:27:25.921 Delete NVM Set: Not Supported 00:27:25.921 Extended LBA Formats Supported: Not Supported 00:27:25.921 Flexible Data Placement Supported: Not Supported 00:27:25.921 00:27:25.921 Controller Memory Buffer Support 00:27:25.921 ================================ 00:27:25.921 Supported: No 00:27:25.921 00:27:25.921 Persistent Memory Region Support 00:27:25.921 ================================ 00:27:25.921 Supported: No 00:27:25.921 00:27:25.921 Admin Command Set Attributes 00:27:25.921 ============================ 00:27:25.921 Security Send/Receive: Not Supported 00:27:25.921 Format NVM: Not Supported 00:27:25.921 Firmware Activate/Download: Not Supported 00:27:25.921 Namespace Management: Not Supported 00:27:25.921 Device Self-Test: Not Supported 00:27:25.921 Directives: Not Supported 00:27:25.921 NVMe-MI: Not Supported 00:27:25.921 Virtualization Management: Not Supported 00:27:25.921 Doorbell Buffer Config: Not Supported 00:27:25.921 Get LBA Status Capability: Not Supported 00:27:25.921 Command & Feature Lockdown Capability: Not Supported 00:27:25.921 Abort Command Limit: 4 00:27:25.921 Async Event Request Limit: 4 00:27:25.921 Number of Firmware Slots: N/A 00:27:25.921 Firmware Slot 1 Read-Only: N/A 00:27:25.921 Firmware Activation Without Reset: N/A 00:27:25.921 Multiple Update Detection Support: N/A 00:27:25.921 Firmware Update Granularity: No Information Provided 00:27:25.921 Per-Namespace SMART Log: Yes 00:27:25.921 Asymmetric Namespace Access Log Page: Supported 00:27:25.921 ANA Transition Time : 10 sec 00:27:25.921 00:27:25.921 Asymmetric Namespace Access Capabilities 00:27:25.921 ANA Optimized State : Supported 00:27:25.921 ANA Non-Optimized State : Supported 00:27:25.921 ANA Inaccessible State : Supported 00:27:25.921 ANA Persistent Loss State : Supported 00:27:25.921 ANA Change State : Supported 00:27:25.921 ANAGRPID is not changed : No 00:27:25.921 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:25.921 00:27:25.921 ANA Group Identifier Maximum : 128 00:27:25.921 Number of ANA Group Identifiers : 128 00:27:25.921 Max Number of Allowed Namespaces : 1024 00:27:25.921 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:25.921 Command Effects Log Page: Supported 00:27:25.921 Get Log Page Extended Data: Supported 00:27:25.921 Telemetry Log Pages: Not Supported 00:27:25.921 Persistent Event Log Pages: Not Supported 00:27:25.921 Supported Log Pages Log Page: May Support 00:27:25.921 Commands Supported & Effects Log Page: Not Supported 00:27:25.921 Feature Identifiers & Effects Log Page:May Support 00:27:25.921 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.921 Data Area 4 for Telemetry Log: Not Supported 00:27:25.921 Error Log Page Entries Supported: 128 00:27:25.921 Keep Alive: Supported 00:27:25.921 Keep Alive Granularity: 1000 ms 00:27:25.921 00:27:25.921 NVM Command Set Attributes 00:27:25.921 ========================== 00:27:25.921 Submission Queue Entry Size 00:27:25.921 Max: 64 00:27:25.921 Min: 64 00:27:25.921 Completion Queue Entry Size 00:27:25.921 Max: 16 00:27:25.921 Min: 16 00:27:25.921 Number of Namespaces: 1024 00:27:25.921 Compare Command: Not Supported 00:27:25.921 Write Uncorrectable Command: Not Supported 00:27:25.921 Dataset Management Command: Supported 00:27:25.921 Write Zeroes Command: Supported 00:27:25.921 Set Features Save Field: Not Supported 00:27:25.921 Reservations: Not Supported 00:27:25.921 Timestamp: Not Supported 00:27:25.921 Copy: Not Supported 00:27:25.921 Volatile Write Cache: Present 00:27:25.921 Atomic Write Unit (Normal): 1 00:27:25.921 Atomic Write Unit (PFail): 1 00:27:25.921 Atomic Compare & Write Unit: 1 00:27:25.921 Fused Compare & Write: Not Supported 00:27:25.921 Scatter-Gather List 00:27:25.921 SGL Command Set: Supported 00:27:25.921 SGL Keyed: Not Supported 00:27:25.921 SGL Bit Bucket Descriptor: Not Supported 00:27:25.921 SGL Metadata Pointer: Not Supported 00:27:25.921 Oversized SGL: Not Supported 00:27:25.921 SGL Metadata Address: Not Supported 00:27:25.921 SGL Offset: Supported 00:27:25.921 Transport SGL Data Block: Not Supported 00:27:25.921 Replay Protected Memory Block: Not Supported 00:27:25.921 00:27:25.921 Firmware Slot Information 00:27:25.921 ========================= 00:27:25.921 Active slot: 0 00:27:25.921 00:27:25.921 Asymmetric Namespace Access 00:27:25.921 =========================== 00:27:25.921 Change Count : 0 00:27:25.921 Number of ANA Group Descriptors : 1 00:27:25.921 ANA Group Descriptor : 0 00:27:25.921 ANA Group ID : 1 00:27:25.921 Number of NSID Values : 1 00:27:25.921 Change Count : 0 00:27:25.921 ANA State : 1 00:27:25.921 Namespace Identifier : 1 00:27:25.921 00:27:25.921 Commands Supported and Effects 00:27:25.921 ============================== 00:27:25.921 Admin Commands 00:27:25.921 -------------- 00:27:25.921 Get Log Page (02h): Supported 00:27:25.921 Identify (06h): Supported 00:27:25.921 Abort (08h): Supported 00:27:25.921 Set Features (09h): Supported 00:27:25.921 Get Features (0Ah): Supported 00:27:25.921 Asynchronous Event Request (0Ch): Supported 00:27:25.921 Keep Alive (18h): Supported 00:27:25.921 I/O Commands 00:27:25.921 ------------ 00:27:25.921 Flush (00h): Supported 00:27:25.921 Write (01h): Supported LBA-Change 00:27:25.921 Read (02h): Supported 00:27:25.921 Write Zeroes (08h): Supported LBA-Change 00:27:25.921 Dataset Management (09h): Supported 00:27:25.921 00:27:25.921 Error Log 00:27:25.921 ========= 00:27:25.921 Entry: 0 00:27:25.921 Error Count: 0x3 00:27:25.921 Submission Queue Id: 0x0 00:27:25.921 Command Id: 0x5 00:27:25.921 Phase Bit: 0 00:27:25.921 Status Code: 0x2 00:27:25.922 Status Code Type: 0x0 00:27:25.922 Do Not Retry: 1 00:27:25.922 
Error Location: 0x28 00:27:25.922 LBA: 0x0 00:27:25.922 Namespace: 0x0 00:27:25.922 Vendor Log Page: 0x0 00:27:25.922 ----------- 00:27:25.922 Entry: 1 00:27:25.922 Error Count: 0x2 00:27:25.922 Submission Queue Id: 0x0 00:27:25.922 Command Id: 0x5 00:27:25.922 Phase Bit: 0 00:27:25.922 Status Code: 0x2 00:27:25.922 Status Code Type: 0x0 00:27:25.922 Do Not Retry: 1 00:27:25.922 Error Location: 0x28 00:27:25.922 LBA: 0x0 00:27:25.922 Namespace: 0x0 00:27:25.922 Vendor Log Page: 0x0 00:27:25.922 ----------- 00:27:25.922 Entry: 2 00:27:25.922 Error Count: 0x1 00:27:25.922 Submission Queue Id: 0x0 00:27:25.922 Command Id: 0x4 00:27:25.922 Phase Bit: 0 00:27:25.922 Status Code: 0x2 00:27:25.922 Status Code Type: 0x0 00:27:25.922 Do Not Retry: 1 00:27:25.922 Error Location: 0x28 00:27:25.922 LBA: 0x0 00:27:25.922 Namespace: 0x0 00:27:25.922 Vendor Log Page: 0x0 00:27:25.922 00:27:25.922 Number of Queues 00:27:25.922 ================ 00:27:25.922 Number of I/O Submission Queues: 128 00:27:25.922 Number of I/O Completion Queues: 128 00:27:25.922 00:27:25.922 ZNS Specific Controller Data 00:27:25.922 ============================ 00:27:25.922 Zone Append Size Limit: 0 00:27:25.922 00:27:25.922 00:27:25.922 Active Namespaces 00:27:25.922 ================= 00:27:25.922 get_feature(0x05) failed 00:27:25.922 Namespace ID:1 00:27:25.922 Command Set Identifier: NVM (00h) 00:27:25.922 Deallocate: Supported 00:27:25.922 Deallocated/Unwritten Error: Not Supported 00:27:25.922 Deallocated Read Value: Unknown 00:27:25.922 Deallocate in Write Zeroes: Not Supported 00:27:25.922 Deallocated Guard Field: 0xFFFF 00:27:25.922 Flush: Supported 00:27:25.922 Reservation: Not Supported 00:27:25.922 Namespace Sharing Capabilities: Multiple Controllers 00:27:25.922 Size (in LBAs): 3750748848 (1788GiB) 00:27:25.922 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:25.922 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:25.922 UUID: 87d0a4d6-525f-4f35-8d38-04b8be090a1c 00:27:25.922 Thin Provisioning: Not Supported 00:27:25.922 Per-NS Atomic Units: Yes 00:27:25.922 Atomic Write Unit (Normal): 8 00:27:25.922 Atomic Write Unit (PFail): 8 00:27:25.922 Preferred Write Granularity: 8 00:27:25.922 Atomic Compare & Write Unit: 8 00:27:25.922 Atomic Boundary Size (Normal): 0 00:27:25.922 Atomic Boundary Size (PFail): 0 00:27:25.922 Atomic Boundary Offset: 0 00:27:25.922 NGUID/EUI64 Never Reused: No 00:27:25.922 ANA group ID: 1 00:27:25.922 Namespace Write Protected: No 00:27:25.922 Number of LBA Formats: 1 00:27:25.922 Current LBA Format: LBA Format #00 00:27:25.922 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:25.922 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:25.922 rmmod nvme_tcp 00:27:25.922 rmmod nvme_fabrics 00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:25.922 11:28:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:27:28.553 11:28:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:31.863 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:27:31.863 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:27:31.863 0000:80:01.4
(8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.863 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:32.125 00:27:32.125 real 0m19.684s 00:27:32.125 user 0m5.318s 00:27:32.125 sys 0m11.363s 00:27:32.125 11:28:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:32.125 11:28:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.125 ************************************ 00:27:32.125 END TEST nvmf_identify_kernel_target 00:27:32.125 ************************************ 00:27:32.125 11:28:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.125 11:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:32.125 11:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:32.125 11:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.387 ************************************ 00:27:32.387 START TEST nvmf_auth_host 00:27:32.387 ************************************ 00:27:32.387 11:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.387 * Looking for test storage... 
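Before the next test gets going: the teardown traced at the end of the previous test undoes the configfs setup in reverse order, and the firewall rule added by ipts is removed by filtering on its SPDK_NVMF comment rather than by tracking rule numbers. A hedged sketch of that same cleanup, with the NQN and paths taken from this log:

    #!/usr/bin/env bash
    # Sketch of the clean_kernel_target + iptr sequence traced above.
    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet

    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # quiesce the namespace first
    rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unpublish from the port
    rmdir "$nvmet/subsystems/$nqn/namespaces/1" "$nvmet/ports/1" "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet

    # Every rule was inserted with `-m comment --comment 'SPDK_NVMF:...'`,
    # so the whole set is dropped by reloading the ruleset minus those lines:
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging rules with a comment and deleting by `grep -v` is what lets the cleanup stay correct even when other rules were inserted or removed in the meantime.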
00:27:32.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.387 11:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:32.387 11:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:32.387 11:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:32.387 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:32.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.388 --rc genhtml_branch_coverage=1 00:27:32.388 --rc genhtml_function_coverage=1 00:27:32.388 --rc genhtml_legend=1 00:27:32.388 --rc geninfo_all_blocks=1 00:27:32.388 --rc geninfo_unexecuted_blocks=1 00:27:32.388 00:27:32.388 ' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:32.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.388 --rc genhtml_branch_coverage=1 00:27:32.388 --rc genhtml_function_coverage=1 00:27:32.388 --rc genhtml_legend=1 00:27:32.388 --rc geninfo_all_blocks=1 00:27:32.388 --rc geninfo_unexecuted_blocks=1 00:27:32.388 00:27:32.388 ' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:32.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.388 --rc genhtml_branch_coverage=1 00:27:32.388 --rc genhtml_function_coverage=1 00:27:32.388 --rc genhtml_legend=1 00:27:32.388 --rc geninfo_all_blocks=1 00:27:32.388 --rc geninfo_unexecuted_blocks=1 00:27:32.388 00:27:32.388 ' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:32.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:32.388 --rc genhtml_branch_coverage=1 00:27:32.388 --rc genhtml_function_coverage=1 00:27:32.388 --rc genhtml_legend=1 00:27:32.388 --rc geninfo_all_blocks=1 00:27:32.388 --rc geninfo_unexecuted_blocks=1 00:27:32.388 00:27:32.388 ' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.388 11:28:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:32.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:32.388 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.650 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.650 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.650 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:32.650 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:32.650 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:32.650 11:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.798 11:28:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:40.798 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:40.798 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.798 
11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:40.798 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:40.798 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.798 11:28:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.798 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:27:40.799 00:27:40.799 --- 10.0.0.2 ping statistics --- 00:27:40.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.799 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:27:40.799 00:27:40.799 --- 10.0.0.1 ping statistics --- 00:27:40.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.799 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2889341 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2889341 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2889341 ']' 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
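The nvmf_tcp_init bring-up traced above can be replayed by hand. A minimal sketch, assuming two back-to-back ports of the same NIC with the interface names and 10.0.0.0/24 addresses shown in the trace; everything else is standard iproute2/iptables usage:
  # Start from clean interfaces, then move the target port into its own
  # namespace so target and initiator traffic really crosses the wire.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side stays in the default namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # Target side lives inside the namespace.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Admit NVMe/TCP (port 4420), then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1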
00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1a8a75d58f6342806fd6a8a6a583e47e 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mwE 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a8a75d58f6342806fd6a8a6a583e47e 0 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a8a75d58f6342806fd6a8a6a583e47e 0 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a8a75d58f6342806fd6a8a6a583e47e 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:40.799 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mwE 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mwE 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.mwE 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.799 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=661207f452c5a231a4eac3e4778b3c71ba3842e6409ffcfc7b771530abbad947 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uMg 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 661207f452c5a231a4eac3e4778b3c71ba3842e6409ffcfc7b771530abbad947 3 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 661207f452c5a231a4eac3e4778b3c71ba3842e6409ffcfc7b771530abbad947 3 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=661207f452c5a231a4eac3e4778b3c71ba3842e6409ffcfc7b771530abbad947 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uMg 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uMg 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uMg 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=13c36472caf9f4fbab6ea32a23c9ede1fb5cd5019b89417f 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.vdO 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13c36472caf9f4fbab6ea32a23c9ede1fb5cd5019b89417f 0 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13c36472caf9f4fbab6ea32a23c9ede1fb5cd5019b89417f 0 
00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13c36472caf9f4fbab6ea32a23c9ede1fb5cd5019b89417f 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.vdO 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.vdO 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.vdO 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c43a32b46e18d69f3f2456c1a17590148b0a82b9b4a8b296 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kPy 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c43a32b46e18d69f3f2456c1a17590148b0a82b9b4a8b296 2 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c43a32b46e18d69f3f2456c1a17590148b0a82b9b4a8b296 2 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.799 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c43a32b46e18d69f3f2456c1a17590148b0a82b9b4a8b296 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kPy 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kPy 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.kPy 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.800 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5d47966e764f12b1e58c1564b09888be 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JrF 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5d47966e764f12b1e58c1564b09888be 1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5d47966e764f12b1e58c1564b09888be 1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5d47966e764f12b1e58c1564b09888be 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JrF 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JrF 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.JrF 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8ff60cce4636efc22280c3d9a921f81 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.szN 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8ff60cce4636efc22280c3d9a921f81 1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8ff60cce4636efc22280c3d9a921f81 1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=e8ff60cce4636efc22280c3d9a921f81 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.szN 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.szN 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.szN 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dff082ae6d170884e91334805fbee821d1e147f199bc7caf 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yIX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dff082ae6d170884e91334805fbee821d1e147f199bc7caf 2 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dff082ae6d170884e91334805fbee821d1e147f199bc7caf 2 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dff082ae6d170884e91334805fbee821d1e147f199bc7caf 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yIX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yIX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yIX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:40.800 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e83960126e1f91056d4e98b9aee0d78d 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xkW 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e83960126e1f91056d4e98b9aee0d78d 0 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e83960126e1f91056d4e98b9aee0d78d 0 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e83960126e1f91056d4e98b9aee0d78d 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xkW 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xkW 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xkW 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:40.800 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ed8eb401d6d22cd4d804553a2df8e489c8b67a1f1463692c84165a18c1546469 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.p4s 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ed8eb401d6d22cd4d804553a2df8e489c8b67a1f1463692c84165a18c1546469 3 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ed8eb401d6d22cd4d804553a2df8e489c8b67a1f1463692c84165a18c1546469 3 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ed8eb401d6d22cd4d804553a2df8e489c8b67a1f1463692c84165a18c1546469 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
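The gen_dhchap_key calls above read random hex with xxd and hand it to an inline python snippet whose body the trace does not show. A hedged reconstruction of that formatting step: the base64 payloads visible later in the log decode to the ASCII hex string itself, and the trailer is assumed here to be a little-endian CRC32 of that string, per the usual NVMe DH-HMAC-CHAP secret encoding:
  # Inputs mirror the trace: a hex string from xxd and a digest id
  # (0=null, 1=sha256, 2=sha384, 3=sha512).
  key=13c36472caf9f4fbab6ea32a23c9ede1fb5cd5019b89417f
  digest=0
  python3 - "$key" "$digest" <<'EOF'
  import base64, struct, sys, zlib
  key = sys.argv[1].encode()               # the ASCII hex string is the secret
  crc = struct.pack("<I", zlib.crc32(key)) # assumed CRC32 trailer
  print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
  EOF
  # Should reproduce the DHHC-1:00:MTNjMzY0...lCVBog==: string seen below.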
nvmf/common.sh@733 -- # python - 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.p4s 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.p4s 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.p4s 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2889341 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2889341 ']' 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.mwE 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uMg ]] 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uMg 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.063 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.vdO 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.kPy ]] 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.kPy 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.JrF 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.324 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.szN ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.szN 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yIX 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xkW ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xkW 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.p4s 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.325 11:28:33 
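rpc_cmd in the trace is a thin wrapper over scripts/rpc.py against the target's default /var/tmp/spdk.sock, so the keyring registrations above are effectively:
  # Register each generated secret with the SPDK keyring; keyN authenticates
  # the host to the controller, ckeyN enables bidirectional authentication.
  ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.mwE
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uMg
  ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.vdO
  ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.kPy
  # ...and likewise for key2/ckey2, key3/ckey3, and key4.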
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:41.325 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:44.625 Waiting for block devices as requested 00:27:44.625 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:44.885 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:44.885 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:44.885 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:45.145 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:45.145 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:45.145 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:45.145 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:45.406 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:45.666 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:45.666 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:45.666 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:45.666 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:45.926 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:45.926 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:45.926 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:46.186 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:47.129 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:47.129 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:47.130 No valid GPT data, bailing 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:47.130 11:28:39 
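The block_in_use check above combines SPDK's spdk-gpt.py probe with blkid; the manual equivalent, assuming the same device node, is simply:
  # An empty PTTYPE means no partition table: the device is free for nvmet.
  blkid -s PTTYPE -o value /dev/nvme0n1 || echo "no partition table, safe to export"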
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:47.130 00:27:47.130 Discovery Log Number of Records 2, Generation counter 2 00:27:47.130 =====Discovery Log Entry 0====== 00:27:47.130 trtype: tcp 00:27:47.130 adrfam: ipv4 00:27:47.130 subtype: current discovery subsystem 00:27:47.130 treq: not specified, sq flow control disable supported 00:27:47.130 portid: 1 00:27:47.130 trsvcid: 4420 00:27:47.130 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:47.130 traddr: 10.0.0.1 00:27:47.130 eflags: none 00:27:47.130 sectype: none 00:27:47.130 =====Discovery Log Entry 1====== 00:27:47.130 trtype: tcp 00:27:47.130 adrfam: ipv4 00:27:47.130 subtype: nvme subsystem 00:27:47.130 treq: not specified, sq flow control disable supported 00:27:47.130 portid: 1 00:27:47.130 trsvcid: 4420 00:27:47.130 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:47.130 traddr: 10.0.0.1 00:27:47.130 eflags: none 00:27:47.130 sectype: none 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
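Condensed, the configure_kernel_target and nvmet_auth_set_key writes traced here amount to the following configfs sequence. The trace only shows the values echoed, so the destination attribute names below (device_path, addr_traddr, dhchap_key, ...) are inferred from the standard kernel nvmet configfs layout:
  # Export /dev/nvme0n1 from the kernel target, listening on 10.0.0.1:4420.
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir "$subsys" "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/

  # Only the named host may connect, and it must authenticate; key1/ckey1
  # hold the DHHC-1 strings generated earlier.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/attr_allow_any_host"
  mkdir "$host"
  ln -s "$host" "$subsys/allowed_hosts/"
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo ffdhe2048      > "$host/dhchap_dhgroup"
  echo "$key1"        > "$host/dhchap_key"       # host-to-controller secret
  echo "$ckey1"       > "$host/dhchap_ctrl_key"  # controller-to-host secret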
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.130 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.391 nvme0n1 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
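Each connect_authenticate iteration from here on is one attach/verify/detach cycle driven through rpc.py, parameterized by digest, DH group, and key index; the first sha256/ffdhe2048/key0 pass traced below reduces to:
  digests=sha256; dhgroups=ffdhe2048; keyid=0
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests "$digests" --dhchap-dhgroups "$dhgroups"
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  ./scripts/rpc.py bdev_nvme_get_controllers      # expect name "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0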
00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.391 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.652 nvme0n1 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.652 11:28:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:47.652 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.653 nvme0n1 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.653 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.913 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.914 nvme0n1 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.914 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 nvme0n1 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.176 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 nvme0n1 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.437 11:28:41 
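[Annotation] Keyid 4 above is the one entry with an empty controller key (ckey=, then [[ -z '' ]]), so its attach uses --dhchap-key key4 alone. The secrets follow the DH-HMAC-CHAP DHHC-1:<t>: format: in this trace the 01/02/03 values of <t> line up with 32/48/64-byte secrets (SHA-256/384/512 digest sizes), while 00 appears to mark a secret used as-is; the base64 payload carries the raw secret followed by a 4-byte CRC-32, an assumption based on common implementations such as nvme-cli. A quick structural check against the DHHC-1:01: key used for keyid 2 earlier in this pass:

  # Decode one DHHC-1 secret from the log and confirm its shape:
  # decoded length should be secret bytes + 4 (trailing CRC-32 -- assumed).
  key='DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh:'
  b64=${key#DHHC-1:??:}                     # strip the 'DHHC-1:01:' prefix
  b64=${b64%:}                              # strip the trailing ':'
  printf '%s' "$b64" | base64 -d | wc -c    # prints 36 = 32 + 4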
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.437 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.438 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.438 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.438 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.698 nvme0n1 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.698 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.699 
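[Annotation] The get_main_ns_ip fragments repeated throughout (ip_candidates, the [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]] guards, echo 10.0.0.1) come from a small transport-to-variable lookup: an associative array maps each transport to the name of the environment variable holding the initiator address, and that name is then dereferenced. A minimal sketch, assuming nvmf/common.sh resolves the name with bash indirect expansion (the exact mechanism is not visible in the trace):

  # Map transport -> env var name, then dereference the name.
  NVMF_INITIATOR_IP=10.0.0.1                # value used in this run
  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  var=${ip_candidates[tcp]}                 # -> NVMF_INITIATOR_IP
  ip=${!var}                                # indirect expansion -> 10.0.0.1
  [[ -z $ip ]] || echo "$ip"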
11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.699 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.959 nvme0n1 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.959 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.960 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.960 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 nvme0n1 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.221 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.221 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.482 nvme0n1 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.482 11:28:42 
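[Annotation] The ckey= followed by [[ -z '' ]] just above is keyid 4 again: its controller key is empty, and the ${ckeys[keyid]:+...} expansion at auth.sh@58 turns that emptiness into "pass no flag at all". The idiom builds an argument array that is either empty or a flag/value pair, which is why the attach for keyid 4 carries no --dhchap-ctrlr-key while the other keyids do. A standalone illustration (the key names are placeholders):

  # ${var:+words}: expands to 'words' only when var is set and non-empty.
  declare -a ckeys=( [1]="ckey1-name" [4]="" )
  ckey=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
  echo "keyid 4 -> ${#ckey[@]} extra args"        # 0: flag omitted entirely
  ckey=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
  echo "keyid 1 -> ${ckey[*]}"                    # --dhchap-ctrlr-key ckey1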
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.482 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.742 nvme0n1 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.743 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.003 nvme0n1 00:27:50.004 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.004 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.004 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.004 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.004 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.004 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:50.265 11:28:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.265 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.266 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.266 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.266 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.266 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.266 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.526 nvme0n1 00:27:50.526 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:50.526 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
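[Annotation] On the target side, each nvmet_auth_set_key call mirrors the key change: the echo 'hmac(sha256)', echo ffdhe4096, and echo DHHC-1:... steps emit the digest, DH group, and secrets for the host NQN, but xtrace does not capture where they are redirected. A plausible reconstruction, assuming the test drives the kernel nvmet target through configfs (the paths and attribute names below are assumptions based on the standard Linux nvmet layout, and the values are the keyid-2 pair from this pass):

  # Hypothetical nvmet side of nvmet_auth_set_key for one keyid.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key='DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh:'   # host secret
  ckey='DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2:'  # ctrl secret
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest
  echo 'ffdhe4096'    > "$host/dhchap_dhgroup"    # DH group
  echo "$key"         > "$host/dhchap_key"
  [[ -z "$ckey" ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional only

The outer loops at auth.sh@101-102 then drive the sweep visible in this excerpt: the sha256 digest is held fixed while dhgroup advances ffdhe2048 -> ffdhe3072 -> ffdhe4096, with keyid running 0-4 inside each group.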
00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.527 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.788 nvme0n1 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.788 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.050 nvme0n1 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.050 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.311 11:28:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.311 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.573 nvme0n1 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.573 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.146 nvme0n1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 
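The get_main_ns_ip helper that runs before every attach (nvmf/common.sh@769-783) picks the address the initiator should dial for the active transport. Below is a hedged reconstruction from the trace; TEST_TRANSPORT is an assumed variable name, since the trace only shows its value, tcp.

    # Reconstructed from the xtrace at nvmf/common.sh@769-783; the real helper
    # may differ in details. It maps the transport to the *name* of the
    # environment variable holding the address, then dereferences it.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # variable name assumed; the trace only shows its value ("tcp")
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # "NVMF_INITIATOR_IP" in a tcp run
        ip=${!ip}                              # indirect expansion -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }

The double expansion explains the paired [[ -z ... ]] checks in the trace: the first guards the variable-name lookup, the second the dereferenced value (10.0.0.1 in this run).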
00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.146 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.407 nvme0n1 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.407 11:28:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.407 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.667 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.668 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.668 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.668 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.668 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.668 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.668 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.928 nvme0n1 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.928 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.929 11:28:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.500 nvme0n1 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.500 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.501 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.072 nvme0n1 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.072 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.644 nvme0n1 00:27:54.644 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.644 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.644 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.644 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.644 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.645 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.587 nvme0n1 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:55.587 
11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.587 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.588 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.588 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.588 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.588 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.588 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.159 nvme0n1 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.160 
11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.160 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.732 nvme0n1 00:27:56.732 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.732 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.732 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.732 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.732 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
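
The secrets echoed throughout this run use the NVMe-oF in-band authentication representation DHHC-1:TT:<base64>:, where TT names the hash the secret was transformed with (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32. That is easy to sanity-check against the keyid=2 secret earlier in the trace:

    # 48 base64 chars -> 36 bytes: a 32-byte secret plus its CRC-32
    echo 'NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh' | base64 -d | wc -c

The keyid=3 (type 02) and keyid=4 (type 03) secrets decode to 52 and 68 bytes respectively, i.e. 48- and 64-byte secrets, matching their type fields.
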
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.993 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.565 nvme0n1 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.565 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.566 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.828 nvme0n1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
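
With sha256 finished, the outer loop has just rolled over to sha384, which makes the overall shape of the test visible: the auth.sh@100-@104 markers are a three-deep sweep over digest, DH group, and key index, each iteration doing a target-side key setup followed by an authenticated connect. A sketch of that skeleton; the array contents are inferred only from the combinations appearing in this log, and the real script may carry more entries (sha512, for instance):

    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)   # groups seen in this log; ffdhe6144 likely sits between
    for digest in "${digests[@]}"; do            # auth.sh@100
        for dhgroup in "${dhgroups[@]}"; do      # auth.sh@101
            for keyid in "${!keys[@]}"; do       # auth.sh@102
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # auth.sh@104
            done
        done
    done
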
common/autotest_common.sh@10 -- # set +x 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.828 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.090 nvme0n1 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:58.090 11:28:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.090 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.351 nvme0n1 00:27:58.351 11:28:50 
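
Each nvmet_auth_set_key call reduces to the echo lines at auth.sh@48-@50, plus a fourth at @51 whenever a controller key is set. Those writes are consistent with the Linux kernel nvmet target's configfs host attributes; the sketch below assumes that layout, since the destination paths never appear in this excerpt:

    # Assumed target-side plumbing behind nvmet_auth_set_key (the
    # configfs paths are an inference, not shown in the log)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"       # auth.sh@48: digest
    echo ffdhe2048      > "$host/dhchap_dhgroup"    # auth.sh@49: DH group
    echo "$key"         > "$host/dhchap_key"        # auth.sh@50: host secret
    [[ -n $ckey ]] && \
        echo "$ckey" > "$host/dhchap_ctrl_key"      # auth.sh@51: bidirectional only
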
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.351 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.352 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.612 nvme0n1 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.613 nvme0n1 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.613 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:58.874 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.875 nvme0n1 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.875 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.136 
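
The get_main_ns_ip helper traced at nvmf/common.sh@769-@783 before every attach resolves which address the initiator should dial: it maps the transport to the name of an environment variable, indirect-expands that name, and bails out if either step comes up empty. Reconstructed from the trace; the variable holding the transport is already expanded away in the xtrace output, so TEST_TRANSPORT is an assumed name:

    get_main_ns_ip() {
        local ip                                        # common.sh@769
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP                 # common.sh@772
            [tcp]=NVMF_INITIATOR_IP                     # common.sh@773
        )
        # common.sh@775: transport set and known?
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}            # common.sh@776
        [[ -z ${!ip} ]] && return 1                     # common.sh@778: ${!ip} is 10.0.0.1 here
        echo "${!ip}"                                   # common.sh@783
    }
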
11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.136 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.137 11:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.137 nvme0n1 00:27:59.137 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:59.397 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.398 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.398 nvme0n1 00:27:59.658 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.658 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.658 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.658 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.659 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.920 nvme0n1 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.920 
11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.920 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.181 nvme0n1 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.181 
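Every iteration of the ffdhe3072 loop above, and of the ffdhe4096/ffdhe6144/ffdhe8192 loops that follow, runs the same cycle: install key keyid on the target (nvmet_auth_set_key), restrict the host to a single digest/dhgroup pair, attach with the matching DH-HMAC-CHAP key(s), confirm the controller came up, and detach. Condensed into plain RPC calls (a sketch only: rpc_cmd in the trace is a wrapper whose scripts/rpc.py path is an assumption, and key0..key4/ckey0..ckey4 are key names registered earlier in the test):

digest=sha384 dhgroup=ffdhe3072 keyid=3

# Host side: allow exactly one digest and one DH group for DH-HMAC-CHAP.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach; --dhchap-ctrlr-key is passed only when a bidirectional
# (controller) key exists for this keyid, as the trace shows.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Authentication succeeded iff the controller is listed under its name.
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

./scripts/rpc.py bdev_nvme_detach_controller nvme0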
11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.181 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.182 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.443 nvme0n1 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.443 11:28:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.443 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.704 nvme0n1 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:00.704 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.705 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.966 nvme0n1 00:28:00.966 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.966 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.966 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.966 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.966 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.966 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.227 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.488 nvme0n1 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.488 11:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.488 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.750 nvme0n1 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.750 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.323 nvme0n1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.323 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 nvme0n1 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.896 11:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.896 11:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.896 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.157 nvme0n1 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.157 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.419 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.419 
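The get_main_ns_ip block that precedes every attach resolves which address the initiator should dial for the active transport. A reconstruction from the traced statements (a sketch: the indirect ${!ip} expansion and the TEST_TRANSPORT variable name are inferred, since xtrace only shows the already-expanded values tcp and 10.0.0.1):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Pick the env-var *name* for this transport, then dereference it.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"    # 10.0.0.1 throughout this run
}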
11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.680 nvme0n1 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.680 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.940 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.941 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.201 nvme0n1 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.201 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.201 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 nvme0n1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.143 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.714 nvme0n1 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.714 
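Every block in this trace instantiates the same three-level loop that the xtrace shows at host/auth.sh@100-104. Reconstructed from those traced for-lines, the body reduces to a minimal sketch (nvmet_auth_set_key and connect_authenticate are the two helpers whose expansions fill the surrounding log):

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # install the key (and controller key, if present) on the target side
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # pin the host to the same digest/dhgroup, attach, verify, detach
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done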
11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.714 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.285 nvme0n1 00:28:06.285 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:06.547 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.548 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.549 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:06.549 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.549 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.121 nvme0n1 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.122 11:28:59 
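Each successful attach is verified and torn down the same way before the next keyid runs; this is the host/auth.sh@64-65 sequence repeated throughout the trace (rpc_cmd forwards the call to the running SPDK application's JSON-RPC interface):

    # list controllers and require exactly the one the attach created
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # drop it so the next (digest, dhgroup, keyid) combination starts clean
    rpc_cmd bdev_nvme_detach_controller nvme0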
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.122 11:28:59 
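keyid 4 is the one entry without a controller key: ckey is empty in the block above, so the [[ -z '' ]] guard skips installing a controller key on the target, and on the host side the host/auth.sh@58 expansion produces an empty array. The attach therefore requests one-way authentication only (the host proves its identity, the controller does not authenticate back). A sketch of that branch, using the flags exactly as they appear in the trace:

    # expands to nothing when ckeys[keyid] is empty, as for keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" "${ckey[@]}"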
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.122 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 nvme0n1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.063 nvme0n1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.063 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.064 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.064 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 nvme0n1 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:08.324 
11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:08.324 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.324 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.585 nvme0n1 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.585 
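The secrets cycling through these blocks all use the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where, per the NVMe secret format (a detail taken from the spec, not from this log), t=00 marks an untransformed secret and 01/02/03 mark secrets sized for SHA-256/384/512; the base64 payload is the secret followed by a 4-byte CRC-32. A quick length check on one of the keys above:

    key='DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh:'
    b64=${key#DHHC-1:*:}                      # strip the "DHHC-1:01:" prefix
    echo -n "${b64%:}" | base64 -d | wc -c    # prints 36 = 32-byte secret + CRC-32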
11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.585 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 nvme0n1 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
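nvmet_auth_set_key itself (host/auth.sh@42-51) is what produces the echo 'hmac(...)' / echo <dhgroup> / echo <key> triple at the top of each block; xtrace does not print redirections, so the configfs attribute names below are an assumption based on the kernel nvmet host entry, not something this log confirms:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # $nvmet_host is assumed to point at the target's configfs host dir
        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"
        echo "$key" > "$nvmet_host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }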
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.846 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.108 nvme0n1 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.108 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.369 nvme0n1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.369 
11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.369 11:29:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.369 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.629 nvme0n1 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:09.629 11:29:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.629 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.889 nvme0n1 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.889 11:29:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.889 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.150 nvme0n1 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.150 
11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.150 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.151 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
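The trace above is one pass of the host/auth.sh key loop: for each (digest, dhgroup, keyid) combination it programs the kernel nvmet target, pins the SPDK initiator to the same parameters, attaches, verifies, and detaches. A minimal sketch of that loop, condensed from the xtrace itself (the keys/ckeys arrays, the rpc_cmd wrapper, and the nvmet_auth_set_key helper are defined earlier in host/auth.sh and only assumed here):

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # program the kernel nvmet target with hmac(sha512) + this dhgroup/key
    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
    # pin the SPDK host side to the same digest/dhgroup pair
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # attach with key N, adding the controller key only when a ckey exists
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # authentication succeeded if the controller actually materialized
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done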
00:28:10.411 nvme0n1 00:28:10.411 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.411 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.411 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.411 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.411 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.411 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.411 11:29:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.411 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.412 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.672 nvme0n1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.672 11:29:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.672 11:29:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.672 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.933 nvme0n1 00:28:10.933 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.933 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.933 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.933 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.933 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.933 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.193 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.194 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.454 nvme0n1 00:28:11.454 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.454 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.454 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.454 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.454 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.454 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 nvme0n1 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.715 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.976 nvme0n1 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.976 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.236 11:29:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.236 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.497 nvme0n1 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.497 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.498 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:12.770 11:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.770 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.056 nvme0n1 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.056 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.631 nvme0n1 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.631 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.203 nvme0n1 00:28:14.203 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.204 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.204 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.465 nvme0n1 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.465 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWE4YTc1ZDU4ZjYzNDI4MDZmZDZhOGE2YTU4M2U0N2XB5muN: 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: ]] 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjYxMjA3ZjQ1MmM1YTIzMWE0ZWFjM2U0Nzc4YjNjNzFiYTM4NDJlNjQwOWZmY2ZjN2I3NzE1MzBhYmJhZDk0N9GlyNE=: 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.726 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.297 nvme0n1 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.297 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.298 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.868 nvme0n1 00:28:15.868 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.868 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.130 11:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.130 11:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.130 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.701 nvme0n1 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGZmMDgyYWU2ZDE3MDg4NGU5MTMzNDgwNWZiZWU4MjFkMWUxNDdmMTk5YmM3Y2FmvaldYg==: 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTgzOTYwMTI2ZTFmOTEwNTZkNGU5OGI5YWVlMGQ3OGRwdd5T: 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.701 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.701 
11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.643 nvme0n1 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWQ4ZWI0MDFkNmQyMmNkNGQ4MDQ1NTNhMmRmOGU0ODljOGI2N2ExZjE0NjM2OTJjODQxNjVhMThjMTU0NjQ2Oc//n7k=: 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.643 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.216 nvme0n1 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.216 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.217 request: 00:28:18.217 { 00:28:18.217 "name": "nvme0", 00:28:18.217 "trtype": "tcp", 00:28:18.217 "traddr": "10.0.0.1", 00:28:18.217 "adrfam": "ipv4", 00:28:18.217 "trsvcid": "4420", 00:28:18.217 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:18.217 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:18.217 "prchk_reftag": false, 00:28:18.217 "prchk_guard": false, 00:28:18.217 "hdgst": false, 00:28:18.217 "ddgst": false, 00:28:18.217 "allow_unrecognized_csi": false, 00:28:18.217 "method": "bdev_nvme_attach_controller", 00:28:18.217 "req_id": 1 00:28:18.217 } 00:28:18.217 Got JSON-RPC error response 00:28:18.217 response: 00:28:18.217 { 00:28:18.217 "code": -5, 00:28:18.217 "message": "Input/output error" 00:28:18.217 } 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.217 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.479 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:18.479 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:18.479 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.479 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.479 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.479 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
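
The request/response pair just above is the first negative check in this phase: after the target side is re-keyed (nvmet_auth_set_key sha256 ffdhe2048 1), host/auth.sh wraps the attach in NOT so that the expected failure — no --dhchap-key offered to a subsystem that now requires DH-HMAC-CHAP — counts as a pass, with the failed handshake surfacing as JSON-RPC error -5 ("Input/output error"). A minimal standalone sketch of the same check, assuming an SPDK checkout, a target at the address seen in the trace, and key names already registered by the test; the expect_failure helper is illustrative, not part of host/auth.sh:

  #!/usr/bin/env bash
  RPC=./scripts/rpc.py                     # assumes an SPDK source tree
  TRADDR=10.0.0.1 TRSVCID=4420             # target address/port from the trace
  HOSTNQN=nqn.2024-02.io.spdk:host0
  SUBNQN=nqn.2024-02.io.spdk:cnode0

  expect_failure() {                       # illustrative: pass only if the command fails
      if "$@"; then
          echo "unexpected success: $*" >&2
          return 1
      fi
  }

  # No --dhchap-key is given, so the DH-HMAC-CHAP handshake cannot complete
  # and the RPC fails with -5 "Input/output error".
  expect_failure "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$TRADDR" -s "$TRSVCID" -q "$HOSTNQN" -n "$SUBNQN"

  # The rejected attach must not leave a controller behind.
  [[ $("$RPC" bdev_nvme_get_controllers | jq length) -eq 0 ]]
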
00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.480 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.480 request: 00:28:18.480 { 00:28:18.480 "name": "nvme0", 00:28:18.480 "trtype": "tcp", 00:28:18.480 "traddr": "10.0.0.1", 00:28:18.480 "adrfam": "ipv4", 00:28:18.480 "trsvcid": "4420", 00:28:18.480 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:18.480 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:18.480 "prchk_reftag": false, 00:28:18.480 "prchk_guard": false, 00:28:18.480 "hdgst": false, 00:28:18.480 "ddgst": false, 00:28:18.480 "dhchap_key": "key2", 00:28:18.480 "allow_unrecognized_csi": false, 00:28:18.480 "method": "bdev_nvme_attach_controller", 00:28:18.480 "req_id": 1 00:28:18.480 } 00:28:18.480 Got JSON-RPC error response 00:28:18.480 response: 00:28:18.480 { 00:28:18.480 "code": -5, 00:28:18.480 "message": "Input/output error" 00:28:18.480 } 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
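
The second rejected attach above makes the same point from the other side: the host does present a key (key2), but the target was keyed with keyid 1, so the challenge-response fails and the RPC returns the same -5 rather than a distinct authentication error code. Continuing the sketch above (same variables and helper), together with the bidirectional variant the trace exercises next — right host key, wrong controller key:

  # Wrong host key: the target expects key1, the host offers key2.
  expect_failure "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$TRADDR" -s "$TRSVCID" -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key2

  # Bidirectional variant: the host key matches, but the controller-side
  # (ckey) response does not, so mutual authentication still fails with -5.
  expect_failure "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$TRADDR" -s "$TRSVCID" -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2
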
00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.480 request: 00:28:18.480 { 00:28:18.480 "name": "nvme0", 00:28:18.480 "trtype": "tcp", 00:28:18.480 "traddr": "10.0.0.1", 00:28:18.480 "adrfam": "ipv4", 00:28:18.480 "trsvcid": "4420", 00:28:18.480 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:18.480 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:18.480 "prchk_reftag": false, 00:28:18.480 "prchk_guard": false, 00:28:18.480 "hdgst": false, 00:28:18.480 "ddgst": false, 00:28:18.480 "dhchap_key": "key1", 00:28:18.480 "dhchap_ctrlr_key": "ckey2", 00:28:18.480 "allow_unrecognized_csi": false, 00:28:18.480 "method": "bdev_nvme_attach_controller", 00:28:18.480 "req_id": 1 00:28:18.480 } 00:28:18.480 Got JSON-RPC error response 00:28:18.480 response: 00:28:18.480 { 00:28:18.480 "code": -5, 00:28:18.480 "message": "Input/output 
error" 00:28:18.480 } 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.480 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.741 nvme0n1 00:28:18.741 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.741 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:18.741 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.742 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.003 request: 00:28:19.003 { 00:28:19.003 "name": "nvme0", 00:28:19.003 "dhchap_key": "key1", 00:28:19.003 "dhchap_ctrlr_key": "ckey2", 00:28:19.003 "method": "bdev_nvme_set_keys", 00:28:19.003 "req_id": 1 00:28:19.003 } 00:28:19.003 Got JSON-RPC error response 00:28:19.003 response: 00:28:19.003 { 00:28:19.003 "code": -13, 00:28:19.003 "message": "Permission denied" 00:28:19.003 } 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:19.003 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:19.945 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTNjMzY0NzJjYWY5ZjRmYmFiNmVhMzJhMjNjOWVkZTFmYjVjZDUwMTliODk0MTdmlCVBog==: 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: ]] 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzQzYTMyYjQ2ZTE4ZDY5ZjNmMjQ1NmMxYTE3NTkwMTQ4YjBhODJiOWI0YThiMjk2BLZAEg==: 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.327 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.328 nvme0n1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWQ0Nzk2NmU3NjRmMTJiMWU1OGMxNTY0YjA5ODg4YmU98iLh: 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: ]] 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZThmZjYwY2NlNDYzNmVmYzIyMjgwYzNkOWE5MjFmODEmOJY2: 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.328 request: 00:28:21.328 { 00:28:21.328 "name": "nvme0", 00:28:21.328 "dhchap_key": "key2", 00:28:21.328 "dhchap_ctrlr_key": "ckey1", 00:28:21.328 "method": "bdev_nvme_set_keys", 00:28:21.328 "req_id": 1 00:28:21.328 } 00:28:21.328 Got JSON-RPC error response 00:28:21.328 response: 00:28:21.328 { 00:28:21.328 "code": -13, 00:28:21.328 "message": "Permission denied" 00:28:21.328 } 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:21.328 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:22.269 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.269 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:22.269 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.269 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.269 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.269 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:22.269 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:22.269 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:22.269 11:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:22.269 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.269 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.531 rmmod nvme_tcp 00:28:22.531 rmmod nvme_fabrics 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2889341 ']' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2889341 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2889341 ']' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2889341 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2889341 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2889341' 00:28:22.531 killing process with pid 2889341 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2889341 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2889341 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:22.531 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:25.079 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:28.378 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:28.378 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:28.950 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.mwE /tmp/spdk.key-null.vdO /tmp/spdk.key-sha256.JrF /tmp/spdk.key-sha384.yIX /tmp/spdk.key-sha512.p4s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:28.950 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:32.254 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
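[Editor's note] The cleanup above unloads nvme-tcp/nvme-fabrics, kills the target (pid 2889341), restores the SPDK_NVMF-tagged iptables rules, flushes and removes the cvl_0_0_ns_spdk namespace, and finally dismantles the kernel nvmet target through configfs. The configfs steps must run in roughly this order, since a directory cannot be removed while links into it remain. An abridged sketch of that teardown with the NQNs from this run (the trace also writes a 0 to an attribute whose path is truncated in the log; that step is omitted here):

    # Kernel nvmet teardown mirrored from the clean_kernel_target trace above
    CFS=/sys/kernel/config/nvmet
    SUBNQN=nqn.2024-02.io.spdk:cnode0
    HOSTNQN=nqn.2024-02.io.spdk:host0
    rm    "$CFS/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN"  # drop host ACL first
    rmdir "$CFS/hosts/$HOSTNQN"
    rm -f "$CFS/ports/1/subsystems/$SUBNQN"                 # unbind port link
    rmdir "$CFS/subsystems/$SUBNQN/namespaces/1"
    rmdir "$CFS/ports/1"
    rmdir "$CFS/subsystems/$SUBNQN"                         # now empty
    modprobe -r nvmet_tcp nvmet                             # unload last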
00:28:32.254 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:32.254 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:32.254 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:32.513 00:28:32.513 real 1m0.366s 00:28:32.513 user 0m54.194s 00:28:32.513 sys 0m15.916s 00:28:32.513 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.513 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.513 ************************************ 00:28:32.513 END TEST nvmf_auth_host 00:28:32.513 ************************************ 00:28:32.774 11:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:32.774 11:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:32.774 11:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:32.774 11:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.774 11:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.774 ************************************ 00:28:32.774 START TEST nvmf_digest 00:28:32.775 ************************************ 00:28:32.775 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:32.775 * Looking for test storage... 
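[Editor's note] Once the digest suite starts, it decides which lcov coverage flags to export by comparing the installed lcov version against 2 (the "lt 1.15 2" walk through scripts/common.sh in the trace that follows). A condensed re-sketch of that comparison, reduced to the logic visible in the trace rather than the verbatim SPDK source:

    # Field-by-field version compare as exercised by 'lt 1.15 2' below
    lt() {  # usage: lt VER1 VER2  -> success when VER1 < VER2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: enable branch/function coverage opts"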
00:28:32.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:32.775 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:32.775 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:32.775 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.036 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.036 --rc genhtml_branch_coverage=1 00:28:33.036 --rc genhtml_function_coverage=1 00:28:33.036 --rc genhtml_legend=1 00:28:33.036 --rc geninfo_all_blocks=1 00:28:33.037 --rc geninfo_unexecuted_blocks=1 00:28:33.037 00:28:33.037 ' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.037 --rc genhtml_branch_coverage=1 00:28:33.037 --rc genhtml_function_coverage=1 00:28:33.037 --rc genhtml_legend=1 00:28:33.037 --rc geninfo_all_blocks=1 00:28:33.037 --rc geninfo_unexecuted_blocks=1 00:28:33.037 00:28:33.037 ' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.037 --rc genhtml_branch_coverage=1 00:28:33.037 --rc genhtml_function_coverage=1 00:28:33.037 --rc genhtml_legend=1 00:28:33.037 --rc geninfo_all_blocks=1 00:28:33.037 --rc geninfo_unexecuted_blocks=1 00:28:33.037 00:28:33.037 ' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.037 --rc genhtml_branch_coverage=1 00:28:33.037 --rc genhtml_function_coverage=1 00:28:33.037 --rc genhtml_legend=1 00:28:33.037 --rc geninfo_all_blocks=1 00:28:33.037 --rc geninfo_unexecuted_blocks=1 00:28:33.037 00:28:33.037 ' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.037 
11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.037 11:29:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.037 11:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.182 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.183 
11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:41.183 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:41.183 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:41.183 Found net devices under 0000:4b:00.0: cvl_0_0 
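[Editor's note] Device discovery above matches both e810 functions (0x8086:0x159b, driver ice) and resolves each PCI address to its kernel interface by globbing sysfs, which is what produces the "Found net devices under ..." echo here and its twin for 0000:4b:00.1 just below. The shape of that lookup, using the addresses from this run:

    # PCI-address-to-netdev mapping behind the 'Found net devices' lines
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
        for dev in "${pci_net_devs[@]##*/}"; do
            echo "Found net devices under $pci: $dev"
        done
    done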
00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:41.183 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.183 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:28:41.183 00:28:41.183 --- 10.0.0.2 ping statistics --- 00:28:41.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.183 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:28:41.183 00:28:41.183 --- 10.0.0.1 ping statistics --- 00:28:41.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.183 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:41.183 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.184 ************************************ 00:28:41.184 START TEST nvmf_digest_clean 00:28:41.184 ************************************ 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2906312 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2906312 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2906312 ']' 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.184 [2024-11-20 11:29:33.209017] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:28:41.184 [2024-11-20 11:29:33.209079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.184 [2024-11-20 11:29:33.289141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.184 [2024-11-20 11:29:33.335147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.184 [2024-11-20 11:29:33.335203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.184 [2024-11-20 11:29:33.335210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.184 [2024-11-20 11:29:33.335215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.184 [2024-11-20 11:29:33.335220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
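[Editor's note] The startup captured above runs nvmf_tgt inside the freshly built cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that launch-and-poll pattern, assuming the workspace layout of this run:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # waitforlisten, simplified: poll until /var/tmp/spdk.sock answers RPCs,
    # bailing out early if the target process has already died
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        sleep 0.1
    done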
00:28:41.184 [2024-11-20 11:29:33.335906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.184 null0 00:28:41.184 [2024-11-20 11:29:33.535922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.184 [2024-11-20 11:29:33.560237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2906338 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2906338 /var/tmp/bperf.sock 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2906338 ']' 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.184 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.184 [2024-11-20 11:29:33.621653] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:28:41.184 [2024-11-20 11:29:33.621714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906338 ] 00:28:41.184 [2024-11-20 11:29:33.712837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.184 [2024-11-20 11:29:33.765152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.756 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.756 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:41.756 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:41.756 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:41.756 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.017 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.017 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.590 nvme0n1 00:28:42.590 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.590 11:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.590 Running I/O for 2 seconds... 
00:28:44.478 19987.00 IOPS, 78.07 MiB/s [2024-11-20T10:29:37.220Z] 20755.00 IOPS, 81.07 MiB/s 00:28:44.478 Latency(us) 00:28:44.478 [2024-11-20T10:29:37.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.478 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:44.478 nvme0n1 : 2.00 20773.83 81.15 0.00 0.00 6154.96 2798.93 20753.07 00:28:44.478 [2024-11-20T10:29:37.220Z] =================================================================================================================== 00:28:44.478 [2024-11-20T10:29:37.220Z] Total : 20773.83 81.15 0.00 0.00 6154.96 2798.93 20753.07 00:28:44.478 { 00:28:44.478 "results": [ 00:28:44.478 { 00:28:44.478 "job": "nvme0n1", 00:28:44.478 "core_mask": "0x2", 00:28:44.478 "workload": "randread", 00:28:44.478 "status": "finished", 00:28:44.478 "queue_depth": 128, 00:28:44.478 "io_size": 4096, 00:28:44.478 "runtime": 2.00329, 00:28:44.478 "iops": 20773.827054495356, 00:28:44.478 "mibps": 81.14776193162248, 00:28:44.478 "io_failed": 0, 00:28:44.478 "io_timeout": 0, 00:28:44.478 "avg_latency_us": 6154.961117518903, 00:28:44.478 "min_latency_us": 2798.9333333333334, 00:28:44.478 "max_latency_us": 20753.066666666666 00:28:44.478 } 00:28:44.478 ], 00:28:44.478 "core_count": 1 00:28:44.478 } 00:28:44.479 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:44.479 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:44.479 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:44.479 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:44.479 | select(.opcode=="crc32c") 00:28:44.479 | "\(.module_name) \(.executed)"' 00:28:44.479 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2906338 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2906338 ']' 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2906338 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906338 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906338' 00:28:44.739 killing process with pid 2906338 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2906338 00:28:44.739 Received shutdown signal, test time was about 2.000000 seconds 00:28:44.739 00:28:44.739 Latency(us) 00:28:44.739 [2024-11-20T10:29:37.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.739 [2024-11-20T10:29:37.481Z] =================================================================================================================== 00:28:44.739 [2024-11-20T10:29:37.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:44.739 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2906338 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2907073 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2907073 /var/tmp/bperf.sock 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2907073 ']' 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:44.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.999 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:44.999 [2024-11-20 11:29:37.626073] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
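Each digest_clean pass in this section drives the bperf socket through the same five steps; collected from the trace into one sequence, with the socket path, target address, and NQN exactly as they appear in this log:

    # 1. start bdevperf idle, waiting for RPC configuration
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # 2. finish framework initialization
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # 3. attach the controller with data digest enabled (--ddgst)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4. run the timed workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # 5. report which accel module executed the crc32c operations
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc \
        '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

Step 5 feeds the read -r acc_module acc_executed / [[ software == software ]] checks seen above: with DSA scanning off, the module must be software and the executed count must be greater than zero.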
00:28:44.999 [2024-11-20 11:29:37.626128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907073 ] 00:28:44.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.999 Zero copy mechanism will not be used. 00:28:44.999 [2024-11-20 11:29:37.708137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.999 [2024-11-20 11:29:37.737439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.938 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.508 nvme0n1 00:28:46.508 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:46.508 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:46.508 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:46.508 Zero copy mechanism will not be used. 00:28:46.508 Running I/O for 2 seconds... 
00:28:48.390 4771.00 IOPS, 596.38 MiB/s [2024-11-20T10:29:41.132Z] 4094.50 IOPS, 511.81 MiB/s 00:28:48.390 Latency(us) 00:28:48.390 [2024-11-20T10:29:41.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.390 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:48.390 nvme0n1 : 2.01 4091.31 511.41 0.00 0.00 3908.23 563.20 7645.87 00:28:48.390 [2024-11-20T10:29:41.132Z] =================================================================================================================== 00:28:48.390 [2024-11-20T10:29:41.132Z] Total : 4091.31 511.41 0.00 0.00 3908.23 563.20 7645.87 00:28:48.390 { 00:28:48.390 "results": [ 00:28:48.390 { 00:28:48.390 "job": "nvme0n1", 00:28:48.390 "core_mask": "0x2", 00:28:48.390 "workload": "randread", 00:28:48.390 "status": "finished", 00:28:48.390 "queue_depth": 16, 00:28:48.390 "io_size": 131072, 00:28:48.390 "runtime": 2.005471, 00:28:48.390 "iops": 4091.308226346828, 00:28:48.390 "mibps": 511.4135282933535, 00:28:48.390 "io_failed": 0, 00:28:48.390 "io_timeout": 0, 00:28:48.390 "avg_latency_us": 3908.2269380459074, 00:28:48.390 "min_latency_us": 563.2, 00:28:48.390 "max_latency_us": 7645.866666666667 00:28:48.390 } 00:28:48.390 ], 00:28:48.390 "core_count": 1 00:28:48.390 } 00:28:48.390 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:48.390 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:48.390 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:48.390 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:48.390 | select(.opcode=="crc32c") 00:28:48.390 | "\(.module_name) \(.executed)"' 00:28:48.390 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2907073 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2907073 ']' 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2907073 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2907073 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2907073' 00:28:48.652 killing process with pid 2907073 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2907073 00:28:48.652 Received shutdown signal, test time was about 2.000000 seconds 00:28:48.652 00:28:48.652 Latency(us) 00:28:48.652 [2024-11-20T10:29:41.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.652 [2024-11-20T10:29:41.394Z] =================================================================================================================== 00:28:48.652 [2024-11-20T10:29:41.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.652 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2907073 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2907909 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2907909 /var/tmp/bperf.sock 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2907909 ']' 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:48.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.914 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.914 [2024-11-20 11:29:41.494738] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
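A quick consistency check on the results blocks above: the mibps field agrees with iops multiplied by the I/O size in MiB (1 MiB = 1048576 bytes). The two randread runs, for example:

    awk 'BEGIN { print 20773.83 * 4096   / 1048576 }'   # -> 81.1478, the 4 KiB block's mibps
    awk 'BEGIN { print 4091.31 * 131072 / 1048576 }'    # -> 511.414, the 128 KiB block's mibps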
00:28:48.914 [2024-11-20 11:29:41.494799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907909 ] 00:28:48.914 [2024-11-20 11:29:41.578586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.914 [2024-11-20 11:29:41.608270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.859 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.432 nvme0n1 00:28:50.432 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:50.432 11:29:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.432 Running I/O for 2 seconds... 
00:28:52.343 30331.00 IOPS, 118.48 MiB/s [2024-11-20T10:29:45.085Z] 30465.50 IOPS, 119.01 MiB/s 00:28:52.343 Latency(us) 00:28:52.343 [2024-11-20T10:29:45.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.343 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.343 nvme0n1 : 2.00 30478.05 119.05 0.00 0.00 4195.10 2075.31 12069.55 00:28:52.343 [2024-11-20T10:29:45.085Z] =================================================================================================================== 00:28:52.343 [2024-11-20T10:29:45.085Z] Total : 30478.05 119.05 0.00 0.00 4195.10 2075.31 12069.55 00:28:52.343 { 00:28:52.343 "results": [ 00:28:52.343 { 00:28:52.343 "job": "nvme0n1", 00:28:52.343 "core_mask": "0x2", 00:28:52.343 "workload": "randwrite", 00:28:52.343 "status": "finished", 00:28:52.343 "queue_depth": 128, 00:28:52.343 "io_size": 4096, 00:28:52.343 "runtime": 2.003376, 00:28:52.343 "iops": 30478.05304645758, 00:28:52.343 "mibps": 119.05489471272492, 00:28:52.343 "io_failed": 0, 00:28:52.343 "io_timeout": 0, 00:28:52.343 "avg_latency_us": 4195.102887371231, 00:28:52.343 "min_latency_us": 2075.306666666667, 00:28:52.343 "max_latency_us": 12069.546666666667 00:28:52.343 } 00:28:52.343 ], 00:28:52.343 "core_count": 1 00:28:52.343 } 00:28:52.343 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:52.343 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:52.343 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:52.604 | select(.opcode=="crc32c") 00:28:52.604 | "\(.module_name) \(.executed)"' 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2907909 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2907909 ']' 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2907909 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2907909 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2907909' 00:28:52.604 killing process with pid 2907909 00:28:52.604 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2907909 00:28:52.604 Received shutdown signal, test time was about 2.000000 seconds 00:28:52.604 00:28:52.604 Latency(us) 00:28:52.604 [2024-11-20T10:29:45.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.604 [2024-11-20T10:29:45.346Z] =================================================================================================================== 00:28:52.604 [2024-11-20T10:29:45.346Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.605 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2907909 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2908705 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2908705 /var/tmp/bperf.sock 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2908705 ']' 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:52.865 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:52.865 [2024-11-20 11:29:45.486234] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
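This launch is the last of the four clean-digest permutations; all four are the same helper invoked over different (rw, bs, qd) tuples with DSA scanning off, i.e. as a sketch:

    # the four run_bperf calls traced in this section, as one sweep
    # (run_bperf is the harness helper; the trailing 'false' is scan_dsa)
    for params in "randread 4096 128" "randread 131072 16" \
                  "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $params false    # word splitting of $params is intentional here
    done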
00:28:52.866 [2024-11-20 11:29:45.486290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908705 ] 00:28:52.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.866 Zero copy mechanism will not be used. 00:28:52.866 [2024-11-20 11:29:45.568787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.866 [2024-11-20 11:29:45.596737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.807 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.379 nvme0n1 00:28:54.379 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:54.379 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.379 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.379 Zero copy mechanism will not be used. 00:28:54.379 Running I/O for 2 seconds... 
00:28:56.257 5836.00 IOPS, 729.50 MiB/s [2024-11-20T10:29:48.999Z] 5263.50 IOPS, 657.94 MiB/s 00:28:56.257 Latency(us) 00:28:56.257 [2024-11-20T10:29:48.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.257 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:56.257 nvme0n1 : 2.00 5265.17 658.15 0.00 0.00 3034.79 1290.24 14308.69 00:28:56.257 [2024-11-20T10:29:48.999Z] =================================================================================================================== 00:28:56.257 [2024-11-20T10:29:48.999Z] Total : 5265.17 658.15 0.00 0.00 3034.79 1290.24 14308.69 00:28:56.257 { 00:28:56.257 "results": [ 00:28:56.257 { 00:28:56.257 "job": "nvme0n1", 00:28:56.257 "core_mask": "0x2", 00:28:56.257 "workload": "randwrite", 00:28:56.257 "status": "finished", 00:28:56.257 "queue_depth": 16, 00:28:56.257 "io_size": 131072, 00:28:56.257 "runtime": 2.003164, 00:28:56.257 "iops": 5265.170500268575, 00:28:56.257 "mibps": 658.1463125335719, 00:28:56.257 "io_failed": 0, 00:28:56.257 "io_timeout": 0, 00:28:56.257 "avg_latency_us": 3034.787643879776, 00:28:56.257 "min_latency_us": 1290.24, 00:28:56.257 "max_latency_us": 14308.693333333333 00:28:56.257 } 00:28:56.257 ], 00:28:56.257 "core_count": 1 00:28:56.257 } 00:28:56.257 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:56.257 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:56.257 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.258 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.258 | select(.opcode=="crc32c") 00:28:56.258 | "\(.module_name) \(.executed)"' 00:28:56.258 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2908705 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2908705 ']' 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2908705 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2908705 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.517 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2908705' 00:28:56.777 killing process with pid 2908705 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2908705 00:28:56.777 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.777 00:28:56.777 Latency(us) 00:28:56.777 [2024-11-20T10:29:49.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.777 [2024-11-20T10:29:49.519Z] =================================================================================================================== 00:28:56.777 [2024-11-20T10:29:49.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2908705 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2906312 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2906312 ']' 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2906312 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2906312 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2906312' 00:28:56.777 killing process with pid 2906312 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2906312 00:28:56.777 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2906312 00:28:57.037 00:28:57.037 real 0m16.385s 00:28:57.037 user 0m32.944s 00:28:57.037 sys 0m3.695s 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:57.037 ************************************ 00:28:57.037 END TEST nvmf_digest_clean 00:28:57.037 ************************************ 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:57.037 ************************************ 00:28:57.037 START TEST nvmf_digest_error 00:28:57.037 ************************************ 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2909419 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2909419 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2909419 ']' 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:57.037 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.037 [2024-11-20 11:29:49.676897] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:28:57.037 [2024-11-20 11:29:49.676950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.037 [2024-11-20 11:29:49.768096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.298 [2024-11-20 11:29:49.799835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.298 [2024-11-20 11:29:49.799864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.298 [2024-11-20 11:29:49.799869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.298 [2024-11-20 11:29:49.799874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.298 [2024-11-20 11:29:49.799878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:57.298 [2024-11-20 11:29:49.800341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.870 [2024-11-20 11:29:50.506281] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.870 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.870 null0 00:28:57.870 [2024-11-20 11:29:50.583833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.870 [2024-11-20 11:29:50.608030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2909766 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2909766 /var/tmp/bperf.sock 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2909766 ']' 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
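The functional difference from digest_clean is configured in the accel RPCs traced here and just below: crc32c is reassigned from the software module to the error-injection module on the target, and that module is then switched between pass-through and corruption. The three calls, verbatim (rpc_cmd is the harness wrapper around scripts/rpc.py aimed at the target's socket; the -i 256 argument is copied as traced, its exact semantics belong to the accel_error module):

    rpc_cmd accel_assign_opc -o crc32c -m error                    # route crc32c to the error module
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # begin in pass-through
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt digest results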
00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.131 11:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.131 [2024-11-20 11:29:50.664788] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:28:58.131 [2024-11-20 11:29:50.664837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909766 ] 00:28:58.131 [2024-11-20 11:29:50.745837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.131 [2024-11-20 11:29:50.775575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.074 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.336 nvme0n1 00:28:59.336 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:59.336 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.336 11:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
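Everything after this point is the injected corruption surfacing on the host. Each repetition decodes the same way, and the --bdev-retry-count -1 set above appears to make the bdev layer retry these I/Os indefinitely rather than fail the run (a reading of the option, not a statement of bdev_nvme internals). Annotated, one occurrence reads:

    nvme_tcp.c:1365: data digest error on tqpair=(0xb0b5c0)
        -> the host-computed CRC32C disagrees with the payload digest
    READ sqid:1 cid:80 nsid:1 lba:11587 len:1 ...
        -> the command it hit: submission queue 1, command id 80, one block at LBA 11587
    COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 ...
        -> generic status 0x22, a transient transport error, so the I/O is retryable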
00:28:59.336 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.336 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:59.336 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.598 Running I/O for 2 seconds... 00:28:59.598 [2024-11-20 11:29:52.112269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.112299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.112308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.124064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.124088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.124095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.135171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.135189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.135196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.144922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.144939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.144946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.154986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.155004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.155010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.163455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.163472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.163478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.173571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.173588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.173595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.182064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.182081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.182087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.192238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.192256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.192262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.203219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.203236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.203242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.213360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.213376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.213382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.222795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.222812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.222818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.230293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.230309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.230316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.240792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.240809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.240816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.248814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.248832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.248838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.259131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.259147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.259154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.266567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.266584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.266590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.277284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.277301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.277308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.286528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.286545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.286554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.297084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.297101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.297108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.305868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.305885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.305891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.315048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.315065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.598 [2024-11-20 11:29:52.315072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.598 [2024-11-20 11:29:52.324462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.598 [2024-11-20 11:29:52.324478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.599 [2024-11-20 11:29:52.324484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.599 [2024-11-20 11:29:52.334245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.599 [2024-11-20 11:29:52.334262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.599 [2024-11-20 11:29:52.334268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.342637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.342654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.342660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.350822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.350838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.350845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.360473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.860 [2024-11-20 11:29:52.360496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.369868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.369885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.369892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.377087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.377104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.377110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.387838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.387855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.387861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.397214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.397230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.397236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.406239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.406256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.406262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.415412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.415429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.415436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.423872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.423889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24061 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.423895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.433153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.433173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.433180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.441850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.441867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.441877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.449365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.449382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.449388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.458608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.458625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.458631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.469210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.469227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.469233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.478378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.478394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.478400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.486801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.486817] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.486823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.495610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.495627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.495633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.504112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.504129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.504135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.512960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.512977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.512983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.522426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.522446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.860 [2024-11-20 11:29:52.522452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.860 [2024-11-20 11:29:52.530992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.860 [2024-11-20 11:29:52.531009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.531015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.541270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.541287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.541293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.551146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.551166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.551173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.559543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.559559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.559566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.568228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.568245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.568251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.577218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.577235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.577241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.586773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.586790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.586796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.861 [2024-11-20 11:29:52.595950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:28:59.861 [2024-11-20 11:29:52.595967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.861 [2024-11-20 11:29:52.595973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.603747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.603764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.603771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.613646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.613663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.613669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.622955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.622972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.622978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.630623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.630639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.630646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.640409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.640425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.640431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.648783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.648799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.648806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.658361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.658378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.658384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.668163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.668179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.668185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.677292] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.677309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.677318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.685586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.685602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.685608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.694468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.694484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.694490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.702635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.702652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.702658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.712269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.712285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.712291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.721212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.721228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.721234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.729275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.729292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.729298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:00.124 [2024-11-20 11:29:52.738964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.738980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.738986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.748822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.748839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.748845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.756202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.756222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.756228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.767046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.124 [2024-11-20 11:29:52.767063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.124 [2024-11-20 11:29:52.767069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.124 [2024-11-20 11:29:52.776066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.776082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.776088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.784992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.785009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.785015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.793601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.793618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.793624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.803031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.803048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.803055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.813540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.813556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.813562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.823701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.823718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.823725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.832604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.832621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.832627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.841243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.841260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.841266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.849826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.849843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.849849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.125 [2024-11-20 11:29:52.858740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.125 [2024-11-20 11:29:52.858757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.125 [2024-11-20 11:29:52.858763] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.386 [2024-11-20 11:29:52.867970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.386 [2024-11-20 11:29:52.867986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.386 [2024-11-20 11:29:52.867992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.386 [2024-11-20 11:29:52.875758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.386 [2024-11-20 11:29:52.875774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.386 [2024-11-20 11:29:52.875780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.386 [2024-11-20 11:29:52.884605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.386 [2024-11-20 11:29:52.884622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.386 [2024-11-20 11:29:52.884628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.386 [2024-11-20 11:29:52.893602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.386 [2024-11-20 11:29:52.893619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.386 [2024-11-20 11:29:52.893626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.386 [2024-11-20 11:29:52.903421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.903438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.903444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.912186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.912203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.912213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.921398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.921415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.921421] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.930209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.930226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.930232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.938918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.938935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.947943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.947960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.947967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.957782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.957798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.957804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.966019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.966036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.966042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.975786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.975803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.975809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.983821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.983839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:00.387 [2024-11-20 11:29:52.983847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:52.992935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:52.992953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:52.992959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.002714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.002731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.002738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.012305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.012323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.012329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.020997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.021013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.021019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.029524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.029540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.029546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.037744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.037759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.037765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.046793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.046810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5072 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.046816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.057258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.057283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.057289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.066800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.066817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.066826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.076399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.076416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.076422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.084100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.084116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.084122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 27507.00 IOPS, 107.45 MiB/s [2024-11-20T10:29:53.129Z] [2024-11-20 11:29:53.094081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.094095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.094101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.103489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.103506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.103512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.112049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.112066] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.112072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.387 [2024-11-20 11:29:53.121816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.387 [2024-11-20 11:29:53.121832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.387 [2024-11-20 11:29:53.121839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.130559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.130576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.130582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.138668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.138685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.138692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.147271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.147290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.147297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.156386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.156403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.156409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.165199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.165216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.165222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.174450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 
00:29:00.649 [2024-11-20 11:29:53.174466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.174473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.183996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.184013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.184019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.194309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.194327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.194333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.201858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.201875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.201882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.211259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.211276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.211283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.220142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.220163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.220170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.228425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.228441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.228448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.239582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.239599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.239606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.249301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.249318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.249326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.256574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.256591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.256597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.266755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.266772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.266778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.275404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.275421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.275427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.284021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.284038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.284044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.649 [2024-11-20 11:29:53.292773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.649 [2024-11-20 11:29:53.292790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.649 [2024-11-20 11:29:53.292796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.301369] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.301387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.301396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.311821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.311838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.311845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.319141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.319157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.319168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.328765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.328782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.328788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.338282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.338299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.338305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.345752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.345775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.355557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.355574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.355580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
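Each repeated triplet in this stretch of the log is one failed READ: nvme_tcp.c reports a CRC-32C data digest (DDGST) mismatch on the receive path, nvme_qpair.c prints the command it belonged to, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The errors are deliberate: the test corrupts the crc32c results that the accel framework computes for incoming data PDUs. A minimal sketch of how the condition is provoked by hand, using only RPCs that appear verbatim later in this log (the socket path, target address, and bdev name are the ones from this run):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Attach with --ddgst so every NVMe/TCP data PDU carries a CRC-32C data digest.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results in the accel error module; every corrupted digest
    # then surfaces as one "data digest error" triplet like those above.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32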
00:29:00.650 [2024-11-20 11:29:53.365599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.365616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.365623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.373050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.373067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.373073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.650 [2024-11-20 11:29:53.382663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.650 [2024-11-20 11:29:53.382683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.650 [2024-11-20 11:29:53.382690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.394715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.394731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.394737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.407866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.407883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.407889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.417911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.417928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.417934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.427778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.427795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.427801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.436592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.436608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.436614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.445597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.445615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.445622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.454617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.454634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.454640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.462820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.462838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.462844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.471648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.471665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.471671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.481257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.481274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.481280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.489882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.489898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.489904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.497772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.497788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.497795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.508517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.508534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.508540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.518606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.518624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.518630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.527071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.527087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.527093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.536035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.536052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.536058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.544924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.544944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.544951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.553282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.553299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.553305] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.561713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.561730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.561736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.570661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.570678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.913 [2024-11-20 11:29:53.570684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.913 [2024-11-20 11:29:53.579451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.913 [2024-11-20 11:29:53.579467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.579474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.588447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.588464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.588470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.597316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.597333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.597339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.606072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.606089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.606095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.615035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.615052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.615058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.624015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.624031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.624038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.633318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.633334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.633341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.641896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.641913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.641920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.914 [2024-11-20 11:29:53.650840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:00.914 [2024-11-20 11:29:53.650856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.914 [2024-11-20 11:29:53.650862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.660341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.660358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.660364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.668405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.668421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.668428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.677801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.677817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.677823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.686133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.686149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.686155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.695428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.695446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.695455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.704749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.704765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.704772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.715108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.715124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.715130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.723171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.723188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.723194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.733532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.733549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.733556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.741337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.741354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.741360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.753121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.753137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.753144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.764597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.764614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.764620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.772340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.772357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.772363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.781795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.781816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.781823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.790614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.790631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.790637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.800167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.800184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.800190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.809269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 
[2024-11-20 11:29:53.809287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.809293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.821658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.821674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.821680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.831778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.831796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.831802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.843205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.843221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.843228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.851356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.851372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.851378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.861774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.861791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.861797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.177 [2024-11-20 11:29:53.870721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.177 [2024-11-20 11:29:53.870738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.177 [2024-11-20 11:29:53.870744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.178 [2024-11-20 11:29:53.879178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb0b5c0) 00:29:01.178 [2024-11-20 11:29:53.879195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.178 [2024-11-20 11:29:53.879201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.178 [2024-11-20 11:29:53.888002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.178 [2024-11-20 11:29:53.888019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.178 [2024-11-20 11:29:53.888025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.178 [2024-11-20 11:29:53.897352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.178 [2024-11-20 11:29:53.897369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.178 [2024-11-20 11:29:53.897375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.178 [2024-11-20 11:29:53.905974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.178 [2024-11-20 11:29:53.905991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.178 [2024-11-20 11:29:53.905997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.178 [2024-11-20 11:29:53.913950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.178 [2024-11-20 11:29:53.913967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.178 [2024-11-20 11:29:53.913973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.439 [2024-11-20 11:29:53.923709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.439 [2024-11-20 11:29:53.923726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.439 [2024-11-20 11:29:53.923732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.439 [2024-11-20 11:29:53.933416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.439 [2024-11-20 11:29:53.933433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.439 [2024-11-20 11:29:53.933439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.439 [2024-11-20 11:29:53.942023] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:53.942041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:53.942050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:53.950858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:53.950875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:53.950882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:53.960618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:53.960635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:53.960642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:53.972717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:53.972735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:53.972741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:53.984658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:53.984675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:53.984681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:53.993262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:53.993285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:53.993291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.002107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.002124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.002130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
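In these completions, (00/22) is the NVMe status pair SCT/SC: status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error; dnr:0 means the do-not-retry bit is clear, so the host is allowed to resubmit the command. A quick way to tally such completions from a saved copy of this console output (the log file name is an assumption):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log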
00:29:01.440 [2024-11-20 11:29:54.011466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.011483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.011489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.019341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.019358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.019364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.029033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.029055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.029061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.040257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.040275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.040281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.049966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.049983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.049989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.058779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.058796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.058802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.440 [2024-11-20 11:29:54.068035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0) 00:29:01.440 [2024-11-20 11:29:54.068051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.440 [2024-11-20 11:29:54.068058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.440 [2024-11-20 11:29:54.076453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0)
00:29:01.440 [2024-11-20 11:29:54.076470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.440 [2024-11-20 11:29:54.076476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.440 [2024-11-20 11:29:54.085414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0)
00:29:01.440 [2024-11-20 11:29:54.085432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.440 [2024-11-20 11:29:54.085438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.440 [2024-11-20 11:29:54.094682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb0b5c0)
00:29:01.440 [2024-11-20 11:29:54.094698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.440 [2024-11-20 11:29:54.094705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:01.440 27540.50 IOPS, 107.58 MiB/s
00:29:01.440 Latency(us)
00:29:01.440 [2024-11-20T10:29:54.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:01.440 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:01.440 nvme0n1 : 2.00 27549.44 107.62 0.00 0.00 4640.81 2280.11 19442.35
00:29:01.440 [2024-11-20T10:29:54.182Z] ===================================================================================================================
00:29:01.440 [2024-11-20T10:29:54.182Z] Total : 27549.44 107.62 0.00 0.00 4640.81 2280.11 19442.35
00:29:01.440 {
00:29:01.440   "results": [
00:29:01.440     {
00:29:01.440       "job": "nvme0n1",
00:29:01.440       "core_mask": "0x2",
00:29:01.440       "workload": "randread",
00:29:01.440       "status": "finished",
00:29:01.440       "queue_depth": 128,
00:29:01.440       "io_size": 4096,
00:29:01.440       "runtime": 2.003997,
00:29:01.440       "iops": 27549.44243928509,
00:29:01.440       "mibps": 107.61500952845738,
00:29:01.440       "io_failed": 0,
00:29:01.440       "io_timeout": 0,
00:29:01.440       "avg_latency_us": 4640.813355310426,
00:29:01.440       "min_latency_us": 2280.1066666666666,
00:29:01.440       "max_latency_us": 19442.346666666668
00:29:01.440     }
00:29:01.440   ],
00:29:01.440   "core_count": 1
00:29:01.440 }
00:29:01.440 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:01.440 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:01.440 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:01.440 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:01.440 | .driver_specific
00:29:01.440 | .nvme_error
00:29:01.440 | .status_code
00:29:01.440 | .command_transient_transport_error'
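The get_transient_errcount helper traced above reads the per-bdev NVMe error counters (collected because the bdev layer is configured with bdev_nvme_set_options --nvme-error-stat, as the next run below also shows) and pulls the transient transport error count out with jq; its result, 216, is asserted non-zero immediately below. A plausible reconstruction of the helper, assuming bperf_rpc is the rpc.py-over-/var/tmp/bperf.sock wrapper seen in the trace:

    get_transient_errcount() {
        # Fetch iostat for the given bdev and extract the count of completions
        # that failed with Command Transient Transport Error.
        bperf_rpc bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }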
00:29:01.700 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:29:01.700 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2909766
00:29:01.700 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2909766 ']'
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2909766
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909766
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909766'
00:29:01.701 killing process with pid 2909766
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2909766
00:29:01.701 Received shutdown signal, test time was about 2.000000 seconds
00:29:01.701
00:29:01.701 Latency(us)
00:29:01.701 [2024-11-20T10:29:54.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:01.701 [2024-11-20T10:29:54.443Z] ===================================================================================================================
00:29:01.701 [2024-11-20T10:29:54.443Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:01.701 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2909766
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2910453
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2910453 /var/tmp/bperf.sock
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2910453 ']'
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:01.961 11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
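The first bdevperf instance has now been killed and reaped, and run_bperf_err sets up the follow-on case: the same random-read error-injection scenario, but with 128 KiB I/Os at queue depth 16. The command assembled in the trace above, reflowed for readability (-z starts bdevperf idle until a perform_tests RPC arrives on /var/tmp/bperf.sock; -t 2 bounds the run to two seconds):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z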
11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:01.961 [2024-11-20 11:29:54.514843] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:29:01.961 [2024-11-20 11:29:54.514900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910453 ]
00:29:01.961 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.961 Zero copy mechanism will not be used.
00:29:01.961 [2024-11-20 11:29:54.596104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:01.961 [2024-11-20 11:29:54.625407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:02.580 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:02.580 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:02.580 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:02.580 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:02.876 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.876 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:03.165 nvme0n1
00:29:03.165 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
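With the controller re-attached with --ddgst and crc32c corruption re-armed (-t corrupt with the injection interval -i 32), the timed run is kicked off through bdevperf's RPC helper, exactly as the trace below records. Note that the READ records that follow show len:32, i.e. 32 blocks of 4 KiB, matching the 131072-byte I/O size:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests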
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:03.426 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:03.426 Zero copy mechanism will not be used.
00:29:03.426 Running I/O for 2 seconds...
00:29:03.426 [2024-11-20 11:29:55.956322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:03.426 [2024-11-20 11:29:55.956353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.426 [2024-11-20 11:29:55.956362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.426 [2024-11-20 11:29:55.962432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:03.426 [2024-11-20 11:29:55.962453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.426 [2024-11-20 11:29:55.962462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:03.427 [2024-11-20 11:29:55.970346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:03.427 [2024-11-20 11:29:55.970366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.427 [2024-11-20 11:29:55.970373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:03.427 [2024-11-20 11:29:55.975377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:03.427 [2024-11-20 11:29:55.975396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.427 [2024-11-20 11:29:55.975403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:03.427 [2024-11-20 11:29:55.982052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:03.427 [2024-11-20 11:29:55.982071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.427 [2024-11-20 11:29:55.982077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:03.427 [2024-11-20 11:29:55.991300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:03.427 [2024-11-20 11:29:55.991318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.427 [2024-11-20 11:29:55.991325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0
[... the three-line sequence above (data digest error -> READ command print -> TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each queued READ from 11:29:55.962 through 11:29:56.939; only the timestamps and the cid, lba, and sqhd fields vary ...]
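The *ERROR* lines come from the host-side receive path in nvme_tcp.c: with data digest enabled, the initiator recomputes CRC32C over each incoming C2HData PDU payload and compares it with the PDU's DDGST field; a mismatch is surfaced up the stack as the TRANSIENT TRANSPORT ERROR (00/22) completions seen here, which is exactly what this digest_error test provokes. A minimal sketch of the check itself, assuming the standard NVMe/TCP CRC-32C (Castagnoli) digest; the function names are illustrative, and SPDK computes this with accelerated routines rather than a byte loop:

    # NVMe/TCP DDGST is CRC-32C: reflected polynomial 0x82F63B78,
    # initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(pdu_payload: bytes, ddgst: int) -> bool:
        # Returning False here corresponds to the
        # "data digest error on tqpair=..." lines above.
        return crc32c(pdu_payload) == ddgst

    assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value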
00:29:04.214 2944.00 IOPS, 368.00 MiB/s [2024-11-20T10:29:56.956Z]
[2024-11-20 11:29:56.950723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10)
00:29:04.214 [2024-11-20 11:29:56.950742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:04.214 [2024-11-20 11:29:56.950748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-line sequence continues from 11:29:56.957 through 11:29:57.457 ...]
00:29:04.739 [2024-11-20 11:29:57.465607]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:04.739 [2024-11-20 11:29:57.465625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.739 [2024-11-20 11:29:57.465631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:04.739 [2024-11-20 11:29:57.475770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:04.740 [2024-11-20 11:29:57.475788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.740 [2024-11-20 11:29:57.475794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.487270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.487288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.487297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.498448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.498466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.498473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.509177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.509195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.509201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.520131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.520149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.520155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.531924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.531942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.531948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:05.000 [2024-11-20 11:29:57.538134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.538151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.538163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.549488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.549506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.549512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.558612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.558630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.558636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.567951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.567968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.567974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.576855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.576873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.576879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.587189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.587207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.587213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.599373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.599392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.599398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.610061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.610079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.610086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.618660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.618677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.618684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.630738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.630757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.630763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.642227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.642244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.642251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.653772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.653790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.653796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.663692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.663710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.663719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.673186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.673204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.673210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.000 [2024-11-20 11:29:57.684730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.000 [2024-11-20 11:29:57.684748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.000 [2024-11-20 11:29:57.684754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.001 [2024-11-20 11:29:57.695866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.001 [2024-11-20 11:29:57.695884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.001 [2024-11-20 11:29:57.695890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.001 [2024-11-20 11:29:57.705733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.001 [2024-11-20 11:29:57.705750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.001 [2024-11-20 11:29:57.705757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.001 [2024-11-20 11:29:57.717325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.001 [2024-11-20 11:29:57.717342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.001 [2024-11-20 11:29:57.717349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.001 [2024-11-20 11:29:57.728570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.001 [2024-11-20 11:29:57.728588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.001 [2024-11-20 11:29:57.728594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.739937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.739954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.739960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.751322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.751339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 
[2024-11-20 11:29:57.751346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.761359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.761380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.761386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.768555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.768572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.768578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.779512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.779530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.779536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.790366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.790383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.790389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.800781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.800797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.800804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.811542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.811560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.811566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.824051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.824069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.824075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.836984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.837002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.837009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.844239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.844257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.844263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.854910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.854927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.854934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.866121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.866139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.866145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.877977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.877995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.878001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.889552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.889570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.889576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.898202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.898221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.898227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.906277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.906294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.263 [2024-11-20 11:29:57.906300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:05.263 [2024-11-20 11:29:57.918007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.263 [2024-11-20 11:29:57.918026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.264 [2024-11-20 11:29:57.918032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:05.264 [2024-11-20 11:29:57.930005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.264 [2024-11-20 11:29:57.930024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.264 [2024-11-20 11:29:57.930030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:05.264 [2024-11-20 11:29:57.943174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x812a10) 00:29:05.264 [2024-11-20 11:29:57.943191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:05.264 [2024-11-20 11:29:57.943200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:05.264 2931.50 IOPS, 366.44 MiB/s
00:29:05.264 Latency(us)
00:29:05.264 [2024-11-20T10:29:58.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.264 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:05.264 nvme0n1 : 2.00 2936.85 367.11 0.00 0.00 5444.40 860.16 18240.85
00:29:05.264 [2024-11-20T10:29:58.006Z] ===================================================================================================================
00:29:05.264 [2024-11-20T10:29:58.006Z] Total : 2936.85 367.11 0.00 0.00 5444.40 860.16 18240.85
00:29:05.264 {
00:29:05.264   "results": [
00:29:05.264     {
00:29:05.264       "job": "nvme0n1",
00:29:05.264       "core_mask": "0x2",
00:29:05.264       "workload": "randread",
00:29:05.264       "status": "finished",
00:29:05.264       "queue_depth": 16,
00:29:05.264       "io_size": 131072,
00:29:05.264       "runtime": 2.001804,
00:29:05.264       "iops": 2936.8509604336887,
00:29:05.264       "mibps": 367.1063700542111,
00:29:05.264       "io_failed": 0,
00:29:05.264       "io_timeout": 0,
00:29:05.264       "avg_latency_us": 5444.401655610364,
00:29:05.264       "min_latency_us": 860.16,
00:29:05.264       "max_latency_us": 18240.853333333333
00:29:05.264     }
00:29:05.264   ],
00:29:05.264   "core_count": 1
00:29:05.264 }
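The block above is the raw JSON result object that bdevperf's perform_tests RPC returns, echoed into the log line by line. A minimal jq sketch, assuming the object has been saved to a hypothetical results.json, pulls the headline numbers back out of it:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json

Every key it reads (job, iops, mibps, avg_latency_us) is taken verbatim from the object shown above.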
00:29:05.264 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:05.264 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:05.264 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:05.264 | .driver_specific
00:29:05.264 | .nvme_error
00:29:05.264 | .status_code
00:29:05.264 | .command_transient_transport_error'
00:29:05.264 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 ))
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2910453
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2910453 ']'
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2910453
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2910453
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2910453'
00:29:05.525 killing process with pid 2910453
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2910453
00:29:05.525 Received shutdown signal, test time was about 2.000000 seconds
00:29:05.525
00:29:05.525 Latency(us)
00:29:05.525 [2024-11-20T10:29:58.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.525 [2024-11-20T10:29:58.267Z] ===================================================================================================================
00:29:05.525 [2024-11-20T10:29:58.267Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:05.525 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2910453
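The get_transient_errcount helper traced at host/digest.sh@27-28 above reduces to roughly the following; this is a sketch reconstructed from the xtrace output, not the verbatim host/digest.sh source:

    # Query bdevperf's per-bdev I/O statistics over its private RPC socket and
    # extract the transient-transport-error counter that the NVMe bdev module
    # keeps when bdev_nvme_set_options --nvme-error-stat is in effect.
    get_transient_errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The check at host/digest.sh@71 then just asserts the counter is positive; here it came back as 190, matching the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged above:

    (( $(get_transient_errcount nvme0n1) > 0 ))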
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2911145
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2911145 /var/tmp/bperf.sock
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2911145 ']'
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:05.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:05.786 11:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.787 [2024-11-20 11:29:58.392486] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:29:05.787 [2024-11-20 11:29:58.392545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911145 ]
00:29:05.787 [2024-11-20 11:29:58.477470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:05.787 [2024-11-20 11:29:58.505690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:06.729 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:07.301 nvme0n1
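Stripped of the xtrace noise, the randwrite error pass that starts here (including the fault-injection step traced just below) condenses to the following command sequence. This is a sketch using the exact flags from this run, not the verbatim host/digest.sh; paths are abbreviated relative to the SPDK tree, and the rpc shell variable is purely illustrative:

    # Start bdevperf suspended (-z) on its own RPC socket; the workload
    # (randwrite, 4096-byte I/O, queue depth 128, 2 s) is queued but not yet run.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c error injection left over from the previous pass.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the TCP target with data digest enabled (--ddgst); this creates nvme0n1.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results in the accel layer (flags as traced below), so digest
    # verification fails and I/O completes as TRANSIENT TRANSPORT ERROR (00/22).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Release the queued workload.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests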
00:29:07.301 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:07.301 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:07.301 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:07.301 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:07.301 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:07.301 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:07.301 Running I/O for 2 seconds...
00:29:07.301 [2024-11-20 11:29:59.861395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f2d80 00:29:07.301 [2024-11-20 11:29:59.862296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.301 [2024-11-20 11:29:59.862326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:07.301 [2024-11-20 11:29:59.870286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f3e60 00:29:07.301 [2024-11-20 11:29:59.871181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.301 [2024-11-20 11:29:59.871202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:07.301 [2024-11-20 11:29:59.878822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f4f40 00:29:07.302 [2024-11-20 11:29:59.879656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.879674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.886767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e8d30 00:29:07.302 [2024-11-20 11:29:59.887639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.887656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.896334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166df550 00:29:07.302 [2024-11-20 11:29:59.897313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.897329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.904807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x17b5520) with pdu=0x2000166e0630 00:29:07.302 [2024-11-20 11:29:59.905790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.905806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.913275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fd208 00:29:07.302 [2024-11-20 11:29:59.914218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.914235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.921730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e4140 00:29:07.302 [2024-11-20 11:29:59.922710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.922730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.930245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e3060 00:29:07.302 [2024-11-20 11:29:59.931188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.931204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.938693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e1f80 00:29:07.302 [2024-11-20 11:29:59.939684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.939701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.947146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e95a0 00:29:07.302 [2024-11-20 11:29:59.948133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.948149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.955592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ed0b0 00:29:07.302 [2024-11-20 11:29:59.956532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.956549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.964009] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ee190 00:29:07.302 [2024-11-20 11:29:59.964986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.965002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.972460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ef270 00:29:07.302 [2024-11-20 11:29:59.973463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.973482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.980897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f0350 00:29:07.302 [2024-11-20 11:29:59.981896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.981914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.989354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f1430 00:29:07.302 [2024-11-20 11:29:59.990340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.990356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:29:59.997798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fb480 00:29:07.302 [2024-11-20 11:29:59.998809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:29:59.998831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:30:00.007768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fc560 00:29:07.302 [2024-11-20 11:30:00.009187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:30:00.009203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:30:00.015685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e2c28 00:29:07.302 [2024-11-20 11:30:00.016786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:30:00.016802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:30:00.025243] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f57b0 00:29:07.302 [2024-11-20 11:30:00.026795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:30:00.026811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:07.302 [2024-11-20 11:30:00.032813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e5220 00:29:07.302 [2024-11-20 11:30:00.033703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.302 [2024-11-20 11:30:00.033718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:07.564 [2024-11-20 11:30:00.040530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e6b70 00:29:07.564 [2024-11-20 11:30:00.041510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.564 [2024-11-20 11:30:00.041526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.564 [2024-11-20 11:30:00.049240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e7818 00:29:07.564 [2024-11-20 11:30:00.049898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.564 [2024-11-20 11:30:00.049916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.564 [2024-11-20 11:30:00.058228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ecc78 00:29:07.564 [2024-11-20 11:30:00.059323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.564 [2024-11-20 11:30:00.059343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.564 [2024-11-20 11:30:00.066661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f3a28 00:29:07.564 [2024-11-20 11:30:00.067735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.564 [2024-11-20 11:30:00.067750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.564 [2024-11-20 11:30:00.075100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e6300 00:29:07.564 [2024-11-20 11:30:00.076178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.564 [2024-11-20 11:30:00.076194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:07.564 
00:29:07.564 [2024-11-20 11:30:00.083561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ecc78
00:29:07.564 [2024-11-20 11:30:00.084659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.564 [2024-11-20 11:30:00.084675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[... the same three-line pattern (data_crc32_calc_done data digest error on tqpair 0x17b5520, WRITE command print, TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for dozens of further WRITEs with varying pdu, cid, and lba values between 11:30:00.092 and 11:30:01.302 ...]
00:29:08.355 29985.00 IOPS, 117.13 MiB/s [2024-11-20T10:30:01.097Z]
00:29:08.619 [2024-11-20 11:30:01.309759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f0350
00:29:08.619 [2024-11-20 11:30:01.310557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20
11:30:01.310574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.619 [2024-11-20 11:30:01.318245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166de8a8 00:29:08.619 [2024-11-20 11:30:01.319053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.619 [2024-11-20 11:30:01.319072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.619 [2024-11-20 11:30:01.326708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f6458 00:29:08.619 [2024-11-20 11:30:01.327516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.619 [2024-11-20 11:30:01.327533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.619 [2024-11-20 11:30:01.335181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e38d0 00:29:08.619 [2024-11-20 11:30:01.335979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.619 [2024-11-20 11:30:01.335999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.619 [2024-11-20 11:30:01.343631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fc560 00:29:08.619 [2024-11-20 11:30:01.344436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.619 [2024-11-20 11:30:01.344453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.619 [2024-11-20 11:30:01.352100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e01f8 00:29:08.619 [2024-11-20 11:30:01.352879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.619 [2024-11-20 11:30:01.352898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.360955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166eb760 00:29:08.881 [2024-11-20 11:30:01.361754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.361772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.369308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e9e10 00:29:08.881 [2024-11-20 11:30:01.370103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:08.881 [2024-11-20 11:30:01.370122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.377790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166eb760 00:29:08.881 [2024-11-20 11:30:01.378530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.378547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.386257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e9e10 00:29:08.881 [2024-11-20 11:30:01.387026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.387047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.394727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166eb760 00:29:08.881 [2024-11-20 11:30:01.395534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.395551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.403554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e9e10 00:29:08.881 [2024-11-20 11:30:01.404562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.404579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.411915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e01f8 00:29:08.881 [2024-11-20 11:30:01.412894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.412911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.420368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e5ec8 00:29:08.881 [2024-11-20 11:30:01.421339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.421356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.428812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f7da8 00:29:08.881 [2024-11-20 11:30:01.429803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10635 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.429823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.437249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f20d8 00:29:08.881 [2024-11-20 11:30:01.438223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.438239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.445690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fc560 00:29:08.881 [2024-11-20 11:30:01.446658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.446677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.454136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166de8a8 00:29:08.881 [2024-11-20 11:30:01.455079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.455095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.462595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fa3a0 00:29:08.881 [2024-11-20 11:30:01.463543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.463559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.471059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fdeb0 00:29:08.881 [2024-11-20 11:30:01.472018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.472034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.479482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f6020 00:29:08.881 [2024-11-20 11:30:01.480464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.480480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.487195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ec408 00:29:08.881 [2024-11-20 11:30:01.488532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10198 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.488549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.495366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166feb58 00:29:08.881 [2024-11-20 11:30:01.495859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.495876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.504403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ed920 00:29:08.881 [2024-11-20 11:30:01.505290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.505309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.512855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e1710 00:29:08.881 [2024-11-20 11:30:01.513745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.513764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.521298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e4140 00:29:08.881 [2024-11-20 11:30:01.522183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.522202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.529734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ed920 00:29:08.881 [2024-11-20 11:30:01.530619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.530635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.538179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e1710 00:29:08.881 [2024-11-20 11:30:01.539074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.539096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.546656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e4140 00:29:08.881 [2024-11-20 11:30:01.547540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:16097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.547557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.555123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ed920 00:29:08.881 [2024-11-20 11:30:01.556028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.556046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.563087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166de038 00:29:08.881 [2024-11-20 11:30:01.563894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.563910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.571839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ef270 00:29:08.881 [2024-11-20 11:30:01.572657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.572675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.580322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166eaab8 00:29:08.881 [2024-11-20 11:30:01.581135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.581152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.588827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ebfd0 00:29:08.881 [2024-11-20 11:30:01.589645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.589665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.596904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f5be8 00:29:08.881 [2024-11-20 11:30:01.597675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.597693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.606543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fda78 00:29:08.881 [2024-11-20 11:30:01.607390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:43 nsid:1 lba:21670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.607408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:08.881 [2024-11-20 11:30:01.614989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f3e60 00:29:08.881 [2024-11-20 11:30:01.615855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.881 [2024-11-20 11:30:01.615874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.623436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ed920 00:29:09.143 [2024-11-20 11:30:01.624265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.624282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.631881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e2c28 00:29:09.143 [2024-11-20 11:30:01.632732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.632749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.641006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e01f8 00:29:09.143 [2024-11-20 11:30:01.642161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.642177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.647840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f7538 00:29:09.143 [2024-11-20 11:30:01.648472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.648492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.656522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166eaab8 00:29:09.143 [2024-11-20 11:30:01.657293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.657310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.665092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e95a0 00:29:09.143 [2024-11-20 11:30:01.665865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.665881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.673519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e9e10 00:29:09.143 [2024-11-20 11:30:01.674297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.674315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.681967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f0350 00:29:09.143 [2024-11-20 11:30:01.682749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.682765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.690617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ea248 00:29:09.143 [2024-11-20 11:30:01.691382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.691399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.699066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166edd58 00:29:09.143 [2024-11-20 11:30:01.699853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.699871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.143 [2024-11-20 11:30:01.707505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fb480 00:29:09.143 [2024-11-20 11:30:01.708278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.143 [2024-11-20 11:30:01.708295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.715931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e8d30 00:29:09.144 [2024-11-20 11:30:01.716711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.716731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.724674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f4298 00:29:09.144 [2024-11-20 
11:30:01.725569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.725585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.733168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ef6a8 00:29:09.144 [2024-11-20 11:30:01.734043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.734059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.741751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e1f80 00:29:09.144 [2024-11-20 11:30:01.742652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.742670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.750199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166e84c0 00:29:09.144 [2024-11-20 11:30:01.751111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.751128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.758635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ebfd0 00:29:09.144 [2024-11-20 11:30:01.759530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.759551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.767061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166ec840 00:29:09.144 [2024-11-20 11:30:01.767957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.767976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.775520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fa3a0 00:29:09.144 [2024-11-20 11:30:01.776415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.776432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.783977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f6cc8 
00:29:09.144 [2024-11-20 11:30:01.784898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.784915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.792436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f92c0 00:29:09.144 [2024-11-20 11:30:01.793355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.793372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.800893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f2d80 00:29:09.144 [2024-11-20 11:30:01.801768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.801785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.809323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fdeb0 00:29:09.144 [2024-11-20 11:30:01.810233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.810250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.817773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166de8a8 00:29:09.144 [2024-11-20 11:30:01.818680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.818697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.826236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166fc560 00:29:09.144 [2024-11-20 11:30:01.827149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.827169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.834687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f20d8 00:29:09.144 [2024-11-20 11:30:01.835605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:09.144 [2024-11-20 11:30:01.835623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:09.144 [2024-11-20 11:30:01.843132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with 
pdu=0x2000166f9f68
00:29:09.144 [2024-11-20 11:30:01.844024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.144 [2024-11-20 11:30:01.844041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:09.144 [2024-11-20 11:30:01.851568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5520) with pdu=0x2000166f46d0
00:29:09.144 [2024-11-20 11:30:01.852478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:09.144 [2024-11-20 11:30:01.852496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:09.144 30096.50 IOPS, 117.56 MiB/s
00:29:09.144 Latency(us)
00:29:09.144 [2024-11-20T10:30:01.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.144 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:09.144 nvme0n1 : 2.00 30093.84 117.55 0.00 0.00 4248.31 2020.69 11414.19
00:29:09.144 [2024-11-20T10:30:01.886Z] ===================================================================================================================
00:29:09.144 [2024-11-20T10:30:01.886Z] Total : 30093.84 117.55 0.00 0.00 4248.31 2020.69 11414.19
00:29:09.144 {
00:29:09.144   "results": [
00:29:09.144     {
00:29:09.144       "job": "nvme0n1",
00:29:09.144       "core_mask": "0x2",
00:29:09.144       "workload": "randwrite",
00:29:09.144       "status": "finished",
00:29:09.144       "queue_depth": 128,
00:29:09.144       "io_size": 4096,
00:29:09.144       "runtime": 2.00443,
00:29:09.144       "iops": 30093.84213966065,
00:29:09.144       "mibps": 117.55407085804941,
00:29:09.144       "io_failed": 0,
00:29:09.144       "io_timeout": 0,
00:29:09.144       "avg_latency_us": 4248.314871879887,
00:29:09.144       "min_latency_us": 2020.6933333333334,
00:29:09.144       "max_latency_us": 11414.186666666666
00:29:09.144     }
00:29:09.144   ],
00:29:09.144   "core_count": 1
00:29:09.144 }
00:29:09.144 11:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:09.404 11:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:09.404 11:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:09.404 | .driver_specific
00:29:09.404 | .nvme_error
00:29:09.404 | .status_code
00:29:09.404 | .command_transient_transport_error'
00:29:09.404 11:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:09.404 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 ))
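Note: the transient-error check traced above is just an iostat query filtered with jq. As a rough sketch (assuming the same bperf RPC socket and bdev name used in this run), the count that digest.sh compares against zero can be reproduced by hand:

    # Ask bdevperf's RPC server for per-bdev I/O statistics; the NVMe error
    # counters are present because bdev_nvme_set_options was called with
    # --nvme-error-stat during setup.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    # The harness then asserts the result is non-zero, e.g. (( 236 > 0 )) above.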
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.404 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2911145 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2911145' 00:29:09.665 killing process with pid 2911145 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2911145 00:29:09.665 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.665 00:29:09.665 Latency(us) 00:29:09.665 [2024-11-20T10:30:02.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.665 [2024-11-20T10:30:02.407Z] =================================================================================================================== 00:29:09.665 [2024-11-20T10:30:02.407Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2911145 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2912007 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2912007 /var/tmp/bperf.sock 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2912007 ']' 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.665 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.665 [2024-11-20 11:30:02.305807] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
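For context, the harness launches bdevperf idle and only starts I/O once everything is wired up over RPC. A minimal sketch of that pattern, with the same paths as this workspace (my reading is that -z keeps bdevperf waiting for a perform_tests RPC rather than starting I/O immediately):

    # Start bdevperf with no I/O running (-z), core mask 0x2, RPC socket at
    # /var/tmp/bperf.sock, 128 KiB random writes, queue depth 16, 2 s run:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # ... attach controllers and arm error injection via rpc.py, then kick off
    # the run, as the trace below does:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests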
00:29:09.665 [2024-11-20 11:30:02.305807] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:29:09.665 [2024-11-20 11:30:02.305872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2912007 ]
00:29:09.665 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:09.665 Zero copy mechanism will not be used.
00:29:09.665 [2024-11-20 11:30:02.391233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:09.926 [2024-11-20 11:30:02.420683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:10.496 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:10.496 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:10.496 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:10.496 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:10.756 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:10.756 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.757 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:10.757 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.757 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:10.757 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.016 nvme0n1
00:29:11.017 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:11.017 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:11.017 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.017 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:11.017 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:11.017 11:30:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:11.017 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:11.017 Zero copy mechanism will not be used.
00:29:11.017 Running I/O for 2 seconds...
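The burst of digest errors that follows is expected: the controller was attached with --ddgst (NVMe/TCP data digest enabled), and the harness asked the accel layer to corrupt CRC32C results, so the affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A sketch of the injection sequence as traced above (assuming rpc_cmd in digest.sh targets the nvmf target app's default RPC socket; that routing is my reading of the harness, not shown explicitly in the log):

    # bdevperf side: keep per-status-code NVMe error counters and retry failed
    # I/O at the bdev layer (-1 appears to mean retry without limit)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the remote controller with TCP data digest enabled
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 32 crc32c operations so computed data digests mismatch
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32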
00:29:11.017 [2024-11-20 11:30:03.660098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.660296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.660320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.017 [2024-11-20 11:30:03.669572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.669632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.669650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.017 [2024-11-20 11:30:03.678332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.678573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.678593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.017 [2024-11-20 11:30:03.685240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.685302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.685319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.017 [2024-11-20 11:30:03.694417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.694500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.694516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.017 [2024-11-20 11:30:03.701431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.701670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.701686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.017 [2024-11-20 11:30:03.707666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:11.017 [2024-11-20 11:30:03.707713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.017 [2024-11-20 11:30:03.707729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.017 [2024-11-20 11:30:03.711261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8
00:29:11.017 [2024-11-20 11:30:03.711325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.017 [2024-11-20 11:30:03.711341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[The same three-record pattern (a data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8, the WRITE command it failed, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats, varying only in the WRITE's lba and the completion's sqhd, for every record from 11:30:03.714 through 11:30:04.617.]
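What these records show: the data digest (DDGST) carried with an NVMe/TCP data PDU is a CRC-32C computed over the PDU's payload; the receiving side recomputes it (in SPDK, in data_crc32_calc_done() in tcp.c, as the records above indicate) and, on a mismatch, completes the command with TRANSIENT TRANSPORT ERROR (00/22) rather than accepting the corrupted payload. A steady stream of one such record per WRITE, with I/O continuing throughout, is consistent with deliberate digest-error injection by the digest test rather than real link corruption. The following is a minimal, dependency-free sketch of that check, not SPDK's implementation; the bitwise crc32c(), the 512-byte payload, and the single flipped bit are illustrative stand-ins.

/*
 * Minimal sketch of an NVMe/TCP data digest (DDGST) check of the kind
 * logged above.  Not SPDK's code: SPDK performs the real verification in
 * data_crc32_calc_done() in tcp.c.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78: the
 * algorithm the NVMe/TCP HDGST/DDGST fields are defined over. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t payload[512];

    memset(payload, 0xA5, sizeof(payload));

    /* Sender: compute the DDGST over the data PDU's payload. */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    /* Corrupt one payload bit "in flight" (a stand-in for the digest
     * test's injected corruption). */
    payload[128] ^= 0x01;

    /* Receiver: recompute and compare.  A mismatch must fail the command
     * with a transport-level status instead of surfacing bad data. */
    if (crc32c(payload, sizeof(payload)) != ddgst) {
        printf("Data digest error: complete command with "
               "TRANSIENT TRANSPORT ERROR (00/22)\n");
    }
    return 0;
}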
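The interim throughput figure reported just below (3731.00 IOPS, 466.38 MiB/s) is internally consistent with the I/O size visible in the WRITE records: each command covers len:32 logical blocks, and assuming the 4 KiB block size these test namespaces typically use (an assumption, but the only size that makes the figures agree), each I/O is 128 KiB:

\[ 3731.00 \times 32 \times 4096\,\text{B} \;=\; 3731.00 \times 128\,\text{KiB} \;\approx\; 466.38\,\text{MiB/s}. \]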
[The same digest-error records continue from 11:30:04.628 through 11:30:04.650.]
00:29:12.064 3731.00 IOPS, 466.38 MiB/s [2024-11-20T10:30:04.806Z]
[The digest-error records then resume at 11:30:04.662 and continue, unchanged in form, through 11:30:04.876.]
00:29:12.327 [2024-11-20 11:30:04.884475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8
[2024-11-20 11:30:04.884773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.327 [2024-11-20 11:30:04.884791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.894491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.894870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.894888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.899975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.900170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.900187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.908264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.908568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.908586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.912672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.912861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.912878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.920105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.920404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.920423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.927903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.928204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.928221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.932240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.932483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.932500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.938419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.938729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.938747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.945908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.946096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.946113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.951825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.952017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.952033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.327 [2024-11-20 11:30:04.956468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.327 [2024-11-20 11:30:04.956797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.327 [2024-11-20 11:30:04.956814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:04.966672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:04.966969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:04.966987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:04.975170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:04.975361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:04.975378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:04.981731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:04.981949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:04.981966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:04.990548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:04.990876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:04.990897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:04.999522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:04.999812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:04.999831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.008933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.009122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.009138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.016236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.016531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.016549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.025843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.026172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.026190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.034538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.034826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.034844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.041851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.041940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.041956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.047960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.048070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.048087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.328 [2024-11-20 11:30:05.058659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.328 [2024-11-20 11:30:05.058957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.328 [2024-11-20 11:30:05.058975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.065166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.065356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.065376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.070355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.070669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.070686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.079524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.079925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.079943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.088920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.089248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.089266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.094620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.094811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.094828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.103123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.103416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.103434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.114072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.114406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.114424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.118248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.118460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.118476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.127135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.127576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.127594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.137043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.137306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.137323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.143741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 11:30:05.144076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.144093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.589 [2024-11-20 11:30:05.151954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.589 [2024-11-20 
11:30:05.152145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.589 [2024-11-20 11:30:05.152167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.161561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.161897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.161914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.168223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.168662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.168681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.178426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.178778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.178796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.185604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.185944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.185962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.190234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.190280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.190295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.198479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.198714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.198734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.205598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with 
pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.205829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.205846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.213833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.214025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.214042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.223565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.223903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.223921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.231786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.232097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.232115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.238715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.238907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.238923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.247909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.248193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.248212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.255798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.256035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.256053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.264792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.265034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.265053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.271029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.271357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.271378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.278731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.279042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.279060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.286762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.286961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.286977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.293622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.293935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.293953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.301942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.302256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.302274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.310897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.311212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.311230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.318641] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.590 [2024-11-20 11:30:05.318805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.590 [2024-11-20 11:30:05.318822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.590 [2024-11-20 11:30:05.322054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.591 [2024-11-20 11:30:05.322222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.591 [2024-11-20 11:30:05.322239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.327641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.327805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.851 [2024-11-20 11:30:05.327822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.330975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.331138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.851 [2024-11-20 11:30:05.331155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.336494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.336655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.851 [2024-11-20 11:30:05.336672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.340075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.340242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.851 [2024-11-20 11:30:05.340259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.345766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.345927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.851 [2024-11-20 11:30:05.345944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.350179] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.350347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.851 [2024-11-20 11:30:05.350364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.851 [2024-11-20 11:30:05.353653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.851 [2024-11-20 11:30:05.353816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.353832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.358073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.358243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.358260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.363137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.363308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.363325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.369149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.369512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.369532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.377413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.377692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.377710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.384948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.385209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.385228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.852 
[2024-11-20 11:30:05.389888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.390068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.390084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.393763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.393923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.393940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.402733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.403068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.403086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.407088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.407256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.407273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.413291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.413467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.413484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.417547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.417898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.417915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.426697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.427005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.427026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.434858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.435231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.435249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.442545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.442879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.442897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.452096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.452385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.452404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.457791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.457962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.457979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.465430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.465779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.465797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.473274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.473577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.473595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.477861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.478022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.478039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.482866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.483163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.483181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.487959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.488121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.488138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.852 [2024-11-20 11:30:05.495574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.852 [2024-11-20 11:30:05.495836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.852 [2024-11-20 11:30:05.495853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.502488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.502612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.502627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.510148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.510225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.510241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.515726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.515992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.516009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.525993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.526262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.526279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.537398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.537666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.537683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.548582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.548882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.548899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.559474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.559754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.559774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.568898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.569097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.569113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.574650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.574746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.583727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.583915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.583931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:12.853 [2024-11-20 11:30:05.587869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:12.853 [2024-11-20 11:30:05.587921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-11-20 11:30:05.587937] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.112 [2024-11-20 11:30:05.596180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.112 [2024-11-20 11:30:05.596416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.112 [2024-11-20 11:30:05.596432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.112 [2024-11-20 11:30:05.603082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.112 [2024-11-20 11:30:05.603134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.113 [2024-11-20 11:30:05.603149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.113 [2024-11-20 11:30:05.609790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.113 [2024-11-20 11:30:05.610080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.113 [2024-11-20 11:30:05.610097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:13.113 [2024-11-20 11:30:05.618346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.113 [2024-11-20 11:30:05.618631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.113 [2024-11-20 11:30:05.618648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:13.113 [2024-11-20 11:30:05.628976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.113 [2024-11-20 11:30:05.629202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.113 [2024-11-20 11:30:05.629221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:13.113 [2024-11-20 11:30:05.639067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.113 [2024-11-20 11:30:05.639315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.113 [2024-11-20 11:30:05.639331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:13.113 [2024-11-20 11:30:05.649690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8 00:29:13.113 [2024-11-20 11:30:05.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.113 [2024-11-20 11:30:05.650031] nvme_qpair.c: 
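The block above is the expected signature of the nvmf_digest_error test: every queued WRITE fails its CRC32C data digest check in data_crc32_calc_done and is completed with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status rather than a data error, which is what the (( 251 > 0 )) assertion further down verifies. A minimal offline cross-check sketch, assuming this console output were saved to a file named build.log (a hypothetical name, not something the test writes):

# Count the digest failures and the transient-error completions they produced;
# with one log record per line the two counts should match.
grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log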
00:29:13.113 3869.50 IOPS, 483.69 MiB/s [2024-11-20T10:30:05.855Z]
00:29:13.113 [2024-11-20 11:30:05.660937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17b5860) with pdu=0x2000166ff3c8
00:29:13.113 [2024-11-20 11:30:05.661237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.113 [2024-11-20 11:30:05.661254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:13.113
00:29:13.113 Latency(us)
00:29:13.113 [2024-11-20T10:30:05.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.113 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:13.113 nvme0n1 : 2.01 3865.99 483.25 0.00 0.00 4130.95 1501.87 12561.07
00:29:13.113 [2024-11-20T10:30:05.855Z] ===================================================================================================================
00:29:13.113 [2024-11-20T10:30:05.855Z] Total : 3865.99 483.25 0.00 0.00 4130.95 1501.87 12561.07
00:29:13.113 {
00:29:13.113 "results": [
00:29:13.113 {
00:29:13.113 "job": "nvme0n1",
00:29:13.113 "core_mask": "0x2",
00:29:13.113 "workload": "randwrite",
00:29:13.113 "status": "finished",
00:29:13.113 "queue_depth": 16,
00:29:13.113 "io_size": 131072,
00:29:13.113 "runtime": 2.00699,
00:29:13.113 "iops": 3865.988370644597,
00:29:13.113 "mibps": 483.24854633057464,
00:29:13.113 "io_failed": 0,
00:29:13.113 "io_timeout": 0,
00:29:13.113 "avg_latency_us": 4130.949383511621,
00:29:13.113 "min_latency_us": 1501.8666666666666,
00:29:13.113 "max_latency_us": 12561.066666666668
00:29:13.113 }
00:29:13.113 ],
00:29:13.113 "core_count": 1
00:29:13.113 }
00:29:13.113 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:13.113 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:13.113 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:13.113 | .driver_specific
00:29:13.113 | .nvme_error
00:29:13.113 | .status_code
00:29:13.113 | .command_transient_transport_error'
00:29:13.113 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:13.372 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 251 > 0 ))
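The (( 251 > 0 )) evaluation above is the pass criterion of the digest-error test: get_transient_errcount asks the bperf application for per-bdev I/O statistics over its RPC socket and extracts the number of commands that completed with the transient transport error status, 251 in this run. A one-line equivalent of the bperf_rpc-plus-jq pipeline shown in the trace, a sketch that works only while the bperf RPC socket at /var/tmp/bperf.sock is still alive:

# Same query digest.sh issues, with the multi-line jq filter collapsed to one path.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'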
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:13.373 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:13.373 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2912007' 00:29:13.373 killing process with pid 2912007 00:29:13.373 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2912007 00:29:13.373 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.373 00:29:13.373 Latency(us) 00:29:13.373 [2024-11-20T10:30:06.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.373 [2024-11-20T10:30:06.115Z] =================================================================================================================== 00:29:13.373 [2024-11-20T10:30:06.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.373 11:30:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2912007 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2909419 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2909419 ']' 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2909419 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909419 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909419' 00:29:13.373 killing process with pid 2909419 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2909419 00:29:13.373 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2909419 00:29:13.632 00:29:13.632 real 0m16.591s 00:29:13.632 user 0m32.993s 00:29:13.632 sys 0m3.528s 00:29:13.632 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:13.632 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.632 ************************************ 00:29:13.632 END TEST nvmf_digest_error 00:29:13.632 ************************************ 00:29:13.632 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == 
tcp ']' 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.633 rmmod nvme_tcp 00:29:13.633 rmmod nvme_fabrics 00:29:13.633 rmmod nvme_keyring 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2909419 ']' 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2909419 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2909419 ']' 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2909419 00:29:13.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2909419) - No such process 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2909419 is not found' 00:29:13.633 Process with pid 2909419 is not found 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.633 11:30:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.173 00:29:16.173 real 0m43.064s 00:29:16.173 user 1m8.110s 00:29:16.173 sys 0m13.082s 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:16.173 ************************************ 00:29:16.173 END TEST nvmf_digest 00:29:16.173 ************************************ 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.173 ************************************ 00:29:16.173 START TEST nvmf_bdevperf 00:29:16.173 ************************************ 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:16.173 * Looking for test storage... 00:29:16.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.173 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.174 --rc genhtml_branch_coverage=1 00:29:16.174 --rc genhtml_function_coverage=1 00:29:16.174 --rc genhtml_legend=1 00:29:16.174 --rc geninfo_all_blocks=1 00:29:16.174 --rc geninfo_unexecuted_blocks=1 00:29:16.174 00:29:16.174 ' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.174 --rc genhtml_branch_coverage=1 00:29:16.174 --rc genhtml_function_coverage=1 00:29:16.174 --rc genhtml_legend=1 00:29:16.174 --rc geninfo_all_blocks=1 00:29:16.174 --rc geninfo_unexecuted_blocks=1 00:29:16.174 00:29:16.174 ' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.174 --rc genhtml_branch_coverage=1 00:29:16.174 --rc genhtml_function_coverage=1 00:29:16.174 --rc genhtml_legend=1 00:29:16.174 --rc geninfo_all_blocks=1 00:29:16.174 --rc geninfo_unexecuted_blocks=1 00:29:16.174 00:29:16.174 ' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.174 --rc genhtml_branch_coverage=1 00:29:16.174 --rc genhtml_function_coverage=1 00:29:16.174 --rc genhtml_legend=1 00:29:16.174 --rc geninfo_all_blocks=1 00:29:16.174 --rc geninfo_unexecuted_blocks=1 00:29:16.174 00:29:16.174 ' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.174 11:30:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.318 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:24.319 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:24.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
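The sequence above is nvmf/common.sh matching each supported PCI ID (Intel E810/X722, Mellanox ConnectX) against the host and, for each hit, resolving the bound kernel net device through sysfs. A minimal standalone sketch of that lookup, assuming an already-probed NIC at the PCI address this log reports (0000:4b:00.0 here; it will differ on other hosts):

pci=0000:4b:00.0
# nvmf/common.sh fills its pci_net_devs array from this sysfs directory; each
# entry is the kernel name of a net device registered for the PCI function.
for dev in /sys/bus/pci/devices/$pci/net/*; do
    name=${dev##*/}
    state=$(cat /sys/class/net/$name/operstate)
    echo "Found net device under $pci: $name ($state)"
done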
00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:24.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:24.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.319 11:30:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:29:24.319 00:29:24.319 --- 10.0.0.2 ping statistics --- 00:29:24.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.319 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:29:24.319 00:29:24.319 --- 10.0.0.1 ping statistics --- 00:29:24.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.319 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2917414 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2917414 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2917414 ']' 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.319 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.320 11:30:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.320 [2024-11-20 11:30:16.351082] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:29:24.320 [2024-11-20 11:30:16.351150] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.320 [2024-11-20 11:30:16.456046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.320 [2024-11-20 11:30:16.507918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.320 [2024-11-20 11:30:16.507969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.320 [2024-11-20 11:30:16.507978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.320 [2024-11-20 11:30:16.507990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.320 [2024-11-20 11:30:16.507996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.320 [2024-11-20 11:30:16.509838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.320 [2024-11-20 11:30:16.510002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.320 [2024-11-20 11:30:16.510002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.581 [2024-11-20 11:30:17.226945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.581 Malloc0 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
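At this point the target side has a TCP transport (8192-byte I/O unit), a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and the 10.0.0.2:4420 listener are attached in the next few commands. A condensed sketch of the same setup driven directly with scripts/rpc.py, assuming a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket (the test instead wraps these calls in rpc_cmd inside the target's network namespace):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420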
00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:24.581 [2024-11-20 11:30:17.302577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:24.581 { 00:29:24.581 "params": { 00:29:24.581 "name": "Nvme$subsystem", 00:29:24.581 "trtype": "$TEST_TRANSPORT", 00:29:24.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:24.581 "adrfam": "ipv4", 00:29:24.581 "trsvcid": "$NVMF_PORT", 00:29:24.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:24.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:24.581 "hdgst": ${hdgst:-false}, 00:29:24.581 "ddgst": ${ddgst:-false} 00:29:24.581 }, 00:29:24.581 "method": "bdev_nvme_attach_controller" 00:29:24.581 } 00:29:24.581 EOF 00:29:24.581 )") 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:24.581 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:24.843 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:24.843 11:30:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:24.843 "params": { 00:29:24.843 "name": "Nvme1", 00:29:24.843 "trtype": "tcp", 00:29:24.843 "traddr": "10.0.0.2", 00:29:24.843 "adrfam": "ipv4", 00:29:24.843 "trsvcid": "4420", 00:29:24.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:24.843 "hdgst": false, 00:29:24.843 "ddgst": false 00:29:24.843 }, 00:29:24.843 "method": "bdev_nvme_attach_controller" 00:29:24.843 }' 00:29:24.843 [2024-11-20 11:30:17.360540] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:29:24.843 [2024-11-20 11:30:17.360604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917763 ] 00:29:24.843 [2024-11-20 11:30:17.452377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.843 [2024-11-20 11:30:17.505148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.104 Running I/O for 1 seconds... 00:29:26.049 8496.00 IOPS, 33.19 MiB/s 00:29:26.049 Latency(us) 00:29:26.049 [2024-11-20T10:30:18.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:26.049 Verification LBA range: start 0x0 length 0x4000 00:29:26.049 Nvme1n1 : 1.01 8537.32 33.35 0.00 0.00 14924.55 3290.45 14636.37 00:29:26.049 [2024-11-20T10:30:18.791Z] =================================================================================================================== 00:29:26.049 [2024-11-20T10:30:18.791Z] Total : 8537.32 33.35 0.00 0.00 14924.55 3290.45 14636.37 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2918013 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.312 { 00:29:26.312 "params": { 00:29:26.312 "name": "Nvme$subsystem", 00:29:26.312 "trtype": "$TEST_TRANSPORT", 00:29:26.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.312 "adrfam": "ipv4", 00:29:26.312 "trsvcid": "$NVMF_PORT", 00:29:26.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.312 "hdgst": ${hdgst:-false}, 00:29:26.312 "ddgst": ${ddgst:-false} 00:29:26.312 }, 00:29:26.312 "method": "bdev_nvme_attach_controller" 00:29:26.312 } 00:29:26.312 EOF 00:29:26.312 )") 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
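Both bdevperf runs are configured entirely through the JSON that gen_nvmf_target_json writes to /dev/fd/62 and /dev/fd/63 above. A minimal sketch of an equivalent standalone invocation, assuming the generated config has been saved to a file (bperf.json is an illustrative name, not taken from the log):

# Flags mirror the second run in this log: queue depth 128, 4096-byte I/O,
# verify workload, 15-second run; -f is carried over from the log's command
# line (the test goes on to kill the target while this I/O is in flight).
./build/examples/bdevperf --json bperf.json -q 128 -o 4096 -w verify -t 15 -f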
00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:26.312 11:30:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.312 "params": { 00:29:26.312 "name": "Nvme1", 00:29:26.312 "trtype": "tcp", 00:29:26.312 "traddr": "10.0.0.2", 00:29:26.312 "adrfam": "ipv4", 00:29:26.312 "trsvcid": "4420", 00:29:26.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.312 "hdgst": false, 00:29:26.312 "ddgst": false 00:29:26.312 }, 00:29:26.312 "method": "bdev_nvme_attach_controller" 00:29:26.312 }' 00:29:26.312 [2024-11-20 11:30:18.895222] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:29:26.312 [2024-11-20 11:30:18.895304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918013 ] 00:29:26.312 [2024-11-20 11:30:18.988141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.312 [2024-11-20 11:30:19.039961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.901 Running I/O for 15 seconds... 00:29:28.784 8863.00 IOPS, 34.62 MiB/s [2024-11-20T10:30:22.103Z] 9969.50 IOPS, 38.94 MiB/s [2024-11-20T10:30:22.103Z] 11:30:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2917414 00:29:29.361 11:30:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:29.361 [2024-11-20 11:30:21.849086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 
11:30:21.849325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.361 [2024-11-20 11:30:21.849504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.361 [2024-11-20 11:30:21.849514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.849985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.849997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 
11:30:21.850104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.362 [2024-11-20 11:30:21.850150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.362 [2024-11-20 11:30:21.850165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:102 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74080 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.363 [2024-11-20 11:30:21.850658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 
[2024-11-20 11:30:21.850817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.363 [2024-11-20 11:30:21.850842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.363 [2024-11-20 11:30:21.850849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:29.364 [2024-11-20 11:30:21.850968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.850985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.850994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 [2024-11-20 11:30:21.851512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.364 [2024-11-20 11:30:21.851519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.364 
[2024-11-20 11:30:21.851528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6db390 is same with the state(6) to be set 00:29:29.364 [2024-11-20 11:30:21.851538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:29.364 [2024-11-20 11:30:21.851545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:29.364 [2024-11-20 11:30:21.851552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73480 len:8 PRP1 0x0 PRP2 0x0 00:29:29.364 [2024-11-20 11:30:21.851559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.365 [2024-11-20 11:30:21.851635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.365 [2024-11-20 11:30:21.851647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.365 [2024-11-20 11:30:21.851656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.365 [2024-11-20 11:30:21.851664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.365 [2024-11-20 11:30:21.851672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.365 [2024-11-20 11:30:21.851679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.365 [2024-11-20 11:30:21.851687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.365 [2024-11-20 11:30:21.851694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.365 [2024-11-20 11:30:21.851702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.855231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.855252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.856048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.856066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.856076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.856301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.856527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.856536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 
[2024-11-20 11:30:21.856545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.856554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.365 [2024-11-20 11:30:21.869320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.869842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.869861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.869871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.870092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.870321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.870331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.870340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.870347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.365 [2024-11-20 11:30:21.883120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.883655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.883673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.883681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.883900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.884120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.884128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.884136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.884142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.365 [2024-11-20 11:30:21.896924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.897463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.897481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.897489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.897708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.897928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.897939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.897951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.897959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.365 [2024-11-20 11:30:21.910753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.911298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.911318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.911326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.911546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.911766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.911775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.911783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.911790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.365 [2024-11-20 11:30:21.924603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.925175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.925193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.925201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.925421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.925641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.925651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.925659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.925667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.365 [2024-11-20 11:30:21.938465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.938869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.938888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.938896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.939116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.939344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.939354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.939361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.939368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.365 [2024-11-20 11:30:21.952361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.952932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.952950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.952958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.953185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.365 [2024-11-20 11:30:21.953407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.365 [2024-11-20 11:30:21.953418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.365 [2024-11-20 11:30:21.953425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.365 [2024-11-20 11:30:21.953432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.365 [2024-11-20 11:30:21.966217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.365 [2024-11-20 11:30:21.966829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.365 [2024-11-20 11:30:21.966873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.365 [2024-11-20 11:30:21.966884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.365 [2024-11-20 11:30:21.967127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:21.967361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:21.967372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:21.967380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:21.967388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.366 [2024-11-20 11:30:21.980267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:21.980828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:21.980851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:21.980859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:21.981080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:21.981309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:21.981320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:21.981328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:21.981335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.366 [2024-11-20 11:30:21.994129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:21.994713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:21.994733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:21.994746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:21.994966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:21.995195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:21.995206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:21.995214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:21.995221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.366 [2024-11-20 11:30:22.008027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:22.008682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:22.008730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:22.008742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:22.008986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:22.009224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:22.009237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:22.009245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:22.009254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.366 [2024-11-20 11:30:22.021849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:22.022449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:22.022473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:22.022482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:22.022704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:22.022925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:22.022936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:22.022944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:22.022951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.366 [2024-11-20 11:30:22.035782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:22.036368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:22.036390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:22.036398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:22.036619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:22.036848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:22.036860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:22.036868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:22.036876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.366 [2024-11-20 11:30:22.049692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:22.050265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:22.050289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:22.050298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:22.050519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:22.050741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:22.050753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:22.050761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:22.050769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.366 [2024-11-20 11:30:22.063591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:22.064187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:22.064211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:22.064220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:22.064442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:22.064665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:22.064675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.366 [2024-11-20 11:30:22.064683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.366 [2024-11-20 11:30:22.064691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:29.366 [2024-11-20 11:30:22.077521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:29.366 [2024-11-20 11:30:22.078194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.366 [2024-11-20 11:30:22.078254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:29.366 [2024-11-20 11:30:22.078267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:29.366 [2024-11-20 11:30:22.078520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:29.366 [2024-11-20 11:30:22.078748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:29.366 [2024-11-20 11:30:22.078760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:29.367 [2024-11-20 11:30:22.078776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:29.367 [2024-11-20 11:30:22.078785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:29.367 [2024-11-20 11:30:22.091408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.367 [2024-11-20 11:30:22.092087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.367 [2024-11-20 11:30:22.092153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.367 [2024-11-20 11:30:22.092178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.367 [2024-11-20 11:30:22.092434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.367 [2024-11-20 11:30:22.092663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.367 [2024-11-20 11:30:22.092675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.367 [2024-11-20 11:30:22.092684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.367 [2024-11-20 11:30:22.092694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.630 [2024-11-20 11:30:22.105325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.630 [2024-11-20 11:30:22.105921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.630 [2024-11-20 11:30:22.105952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.630 [2024-11-20 11:30:22.105962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.630 [2024-11-20 11:30:22.106196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.630 [2024-11-20 11:30:22.106423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.630 [2024-11-20 11:30:22.106433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.630 [2024-11-20 11:30:22.106442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.630 [2024-11-20 11:30:22.106451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.630 [2024-11-20 11:30:22.119297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.630 [2024-11-20 11:30:22.119894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.630 [2024-11-20 11:30:22.119921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.630 [2024-11-20 11:30:22.119930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.630 [2024-11-20 11:30:22.120152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.630 [2024-11-20 11:30:22.120389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.630 [2024-11-20 11:30:22.120401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.630 [2024-11-20 11:30:22.120410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.630 [2024-11-20 11:30:22.120419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.630 [2024-11-20 11:30:22.133272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.630 [2024-11-20 11:30:22.133887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.630 [2024-11-20 11:30:22.133912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.630 [2024-11-20 11:30:22.133921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.630 [2024-11-20 11:30:22.134143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.630 [2024-11-20 11:30:22.134378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.630 [2024-11-20 11:30:22.134390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.630 [2024-11-20 11:30:22.134398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.630 [2024-11-20 11:30:22.134406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.630 [2024-11-20 11:30:22.147267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.630 [2024-11-20 11:30:22.147872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.630 [2024-11-20 11:30:22.147899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.630 [2024-11-20 11:30:22.147908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.630 [2024-11-20 11:30:22.148130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.630 [2024-11-20 11:30:22.148362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.630 [2024-11-20 11:30:22.148374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.630 [2024-11-20 11:30:22.148382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.630 [2024-11-20 11:30:22.148390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.630 [2024-11-20 11:30:22.161229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.630 [2024-11-20 11:30:22.161716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.630 [2024-11-20 11:30:22.161742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.630 [2024-11-20 11:30:22.161750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.630 [2024-11-20 11:30:22.161972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.630 [2024-11-20 11:30:22.162204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.630 [2024-11-20 11:30:22.162217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.630 [2024-11-20 11:30:22.162225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.630 [2024-11-20 11:30:22.162233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.630 [2024-11-20 11:30:22.175062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.630 [2024-11-20 11:30:22.175674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.630 [2024-11-20 11:30:22.175700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.630 [2024-11-20 11:30:22.175715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.630 [2024-11-20 11:30:22.175937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.630 [2024-11-20 11:30:22.176170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.176183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.176192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.176202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.189030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.189597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.189623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.189632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.189854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.190076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.190090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.190098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.190106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.202966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.203420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.203448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.203457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.203680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.203904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.203916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.203924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.203933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.216975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.217550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.217576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.217585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.217807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.218032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.218053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.218062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.218071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.230909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.231572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.231638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.231651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.231908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.232137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.232149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.232169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.232180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.244839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.245475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.245506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.245515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.245739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.245964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.245976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.245985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.245994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.258836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.259512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.259578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.259591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.259845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.260075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.260089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.260098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.260120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.272724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.273476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.273540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.273553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.273809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.274036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.274049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.274058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.274067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.286728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.287452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.287532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.287788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.288017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.288029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.288038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.288047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.300697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.301310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.301376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.301390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.631 [2024-11-20 11:30:22.301647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.631 [2024-11-20 11:30:22.301875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.631 [2024-11-20 11:30:22.301889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.631 [2024-11-20 11:30:22.301898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.631 [2024-11-20 11:30:22.301908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.631 [2024-11-20 11:30:22.314531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.631 [2024-11-20 11:30:22.315219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.631 [2024-11-20 11:30:22.315285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.631 [2024-11-20 11:30:22.315298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.632 [2024-11-20 11:30:22.315554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.632 [2024-11-20 11:30:22.315783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.632 [2024-11-20 11:30:22.315796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.632 [2024-11-20 11:30:22.315805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.632 [2024-11-20 11:30:22.315815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.632 [2024-11-20 11:30:22.328449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.632 [2024-11-20 11:30:22.329184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.632 [2024-11-20 11:30:22.329251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.632 [2024-11-20 11:30:22.329266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.632 [2024-11-20 11:30:22.329524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.632 [2024-11-20 11:30:22.329753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.632 [2024-11-20 11:30:22.329766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.632 [2024-11-20 11:30:22.329775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.632 [2024-11-20 11:30:22.329785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.632 8506.33 IOPS, 33.23 MiB/s [2024-11-20T10:30:22.374Z] [2024-11-20 11:30:22.344070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.632 [2024-11-20 11:30:22.344709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.632 [2024-11-20 11:30:22.344739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.632 [2024-11-20 11:30:22.344749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.632 [2024-11-20 11:30:22.344972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.632 [2024-11-20 11:30:22.345205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.632 [2024-11-20 11:30:22.345219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.632 [2024-11-20 11:30:22.345228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.632 [2024-11-20 11:30:22.345237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.632 [2024-11-20 11:30:22.358041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.632 [2024-11-20 11:30:22.358636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.632 [2024-11-20 11:30:22.358663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.632 [2024-11-20 11:30:22.358681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.632 [2024-11-20 11:30:22.358903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.632 [2024-11-20 11:30:22.359127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.632 [2024-11-20 11:30:22.359138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.632 [2024-11-20 11:30:22.359147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.632 [2024-11-20 11:30:22.359156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
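The fio status sample folded into the first line above (8506.33 IOPS, 33.23 MiB/s) is internally consistent with a 4 KiB I/O size, which is not stated in this excerpt but is implied by the ratio: 8506.33 IOPS x 4096 B = 34,841,928 B/s, and 34,841,928 / 1,048,576 = 33.23 MiB/s.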
00:29:29.894 [2024-11-20 11:30:22.371961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.894 [2024-11-20 11:30:22.372554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.894 [2024-11-20 11:30:22.372579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.894 [2024-11-20 11:30:22.372588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.894 [2024-11-20 11:30:22.372811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.894 [2024-11-20 11:30:22.373034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.894 [2024-11-20 11:30:22.373045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.894 [2024-11-20 11:30:22.373053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.894 [2024-11-20 11:30:22.373061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.894 [2024-11-20 11:30:22.385862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.894 [2024-11-20 11:30:22.386431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.894 [2024-11-20 11:30:22.386456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.894 [2024-11-20 11:30:22.386466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.894 [2024-11-20 11:30:22.386689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.894 [2024-11-20 11:30:22.386911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.894 [2024-11-20 11:30:22.386923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.894 [2024-11-20 11:30:22.386932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.894 [2024-11-20 11:30:22.386940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.894 [2024-11-20 11:30:22.399743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.894 [2024-11-20 11:30:22.400482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.894 [2024-11-20 11:30:22.400547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.894 [2024-11-20 11:30:22.400560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.894 [2024-11-20 11:30:22.400816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.894 [2024-11-20 11:30:22.401044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.894 [2024-11-20 11:30:22.401064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.894 [2024-11-20 11:30:22.401074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.894 [2024-11-20 11:30:22.401083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.894 [2024-11-20 11:30:22.413714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.894 [2024-11-20 11:30:22.414474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.414538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.414552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.414807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.415036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.415047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.415057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.415067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.427675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.428299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.428365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.428380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.428637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.428865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.428877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.428886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.428896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.441527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.442143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.442177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.442187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.442410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.442634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.442646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.442654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.442670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.455471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.456113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.456178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.456191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.456439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.456666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.456677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.456685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.456694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.469282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.469747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.469774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.469783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.470005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.470240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.470255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.470263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.470271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.483271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.483801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.483848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.483860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.484104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.484339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.484353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.484362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.484370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.497156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.497850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.497895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.497907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.498150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.498385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.498397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.498405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.498414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.510998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.511470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.511492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.511500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.511721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.511941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.511951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.511959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.511966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.524941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.525479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.525497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.525505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.525724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.525945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.525955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.525962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.525969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.538748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.539407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.539448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.539459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.539703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.539928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.539939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.539947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.539955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.895 [2024-11-20 11:30:22.552730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.895 [2024-11-20 11:30:22.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.895 [2024-11-20 11:30:22.553330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.895 [2024-11-20 11:30:22.553338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.895 [2024-11-20 11:30:22.553558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.895 [2024-11-20 11:30:22.553778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.895 [2024-11-20 11:30:22.553787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.895 [2024-11-20 11:30:22.553795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.895 [2024-11-20 11:30:22.553802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.896 [2024-11-20 11:30:22.566566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.896 [2024-11-20 11:30:22.567236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.896 [2024-11-20 11:30:22.567275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.896 [2024-11-20 11:30:22.567286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.896 [2024-11-20 11:30:22.567525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.896 [2024-11-20 11:30:22.567749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.896 [2024-11-20 11:30:22.567758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.896 [2024-11-20 11:30:22.567766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.896 [2024-11-20 11:30:22.567774] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.896 [2024-11-20 11:30:22.580554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.896 [2024-11-20 11:30:22.581236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.896 [2024-11-20 11:30:22.581275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.896 [2024-11-20 11:30:22.581285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.896 [2024-11-20 11:30:22.581524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.896 [2024-11-20 11:30:22.581748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.896 [2024-11-20 11:30:22.581762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.896 [2024-11-20 11:30:22.581770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.896 [2024-11-20 11:30:22.581778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.896 [2024-11-20 11:30:22.594707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.896 [2024-11-20 11:30:22.595434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.896 [2024-11-20 11:30:22.595473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.896 [2024-11-20 11:30:22.595484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.896 [2024-11-20 11:30:22.595722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.896 [2024-11-20 11:30:22.595947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.896 [2024-11-20 11:30:22.595958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.896 [2024-11-20 11:30:22.595966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.896 [2024-11-20 11:30:22.595974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.896 [2024-11-20 11:30:22.608541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.896 [2024-11-20 11:30:22.609080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.896 [2024-11-20 11:30:22.609101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.896 [2024-11-20 11:30:22.609110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.896 [2024-11-20 11:30:22.609336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.896 [2024-11-20 11:30:22.609557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.896 [2024-11-20 11:30:22.609567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.896 [2024-11-20 11:30:22.609575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.896 [2024-11-20 11:30:22.609583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:29.896 [2024-11-20 11:30:22.622343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:29.896 [2024-11-20 11:30:22.622992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.896 [2024-11-20 11:30:22.623031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:29.896 [2024-11-20 11:30:22.623042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:29.896 [2024-11-20 11:30:22.623289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:29.896 [2024-11-20 11:30:22.623513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:29.896 [2024-11-20 11:30:22.623524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:29.896 [2024-11-20 11:30:22.623532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:29.896 [2024-11-20 11:30:22.623545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.159 [2024-11-20 11:30:22.636324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.159 [2024-11-20 11:30:22.636901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.159 [2024-11-20 11:30:22.636921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.159 [2024-11-20 11:30:22.636929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.159 [2024-11-20 11:30:22.637149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.159 [2024-11-20 11:30:22.637388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.159 [2024-11-20 11:30:22.637399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.159 [2024-11-20 11:30:22.637407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.159 [2024-11-20 11:30:22.637414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.159 [2024-11-20 11:30:22.650170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.159 [2024-11-20 11:30:22.650833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.159 [2024-11-20 11:30:22.650873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.159 [2024-11-20 11:30:22.650884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.159 [2024-11-20 11:30:22.651124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.159 [2024-11-20 11:30:22.651359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.159 [2024-11-20 11:30:22.651370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.159 [2024-11-20 11:30:22.651378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.159 [2024-11-20 11:30:22.651386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.159 [2024-11-20 11:30:22.664149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.159 [2024-11-20 11:30:22.664736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.159 [2024-11-20 11:30:22.664776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.159 [2024-11-20 11:30:22.664788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.159 [2024-11-20 11:30:22.665027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.159 [2024-11-20 11:30:22.665262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.159 [2024-11-20 11:30:22.665273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.159 [2024-11-20 11:30:22.665281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.159 [2024-11-20 11:30:22.665289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.159 [2024-11-20 11:30:22.678056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.159 [2024-11-20 11:30:22.678732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.159 [2024-11-20 11:30:22.678779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:30.159 [2024-11-20 11:30:22.678790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:30.159 [2024-11-20 11:30:22.679032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:30.159 [2024-11-20 11:30:22.679268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.159 [2024-11-20 11:30:22.679279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.159 [2024-11-20 11:30:22.679288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.159 [2024-11-20 11:30:22.679296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.159 [2024-11-20 11:30:22.691946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.159 [2024-11-20 11:30:22.692639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.159 [2024-11-20 11:30:22.692682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:30.159 [2024-11-20 11:30:22.692694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:30.159 [2024-11-20 11:30:22.692936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:30.159 [2024-11-20 11:30:22.693171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.159 [2024-11-20 11:30:22.693182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.159 [2024-11-20 11:30:22.693191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.160 [2024-11-20 11:30:22.693199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.160 [2024-11-20 11:30:22.705773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.160 [2024-11-20 11:30:22.706463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.160 [2024-11-20 11:30:22.706507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:30.160 [2024-11-20 11:30:22.706518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:30.160 [2024-11-20 11:30:22.706760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:30.160 [2024-11-20 11:30:22.706985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.160 [2024-11-20 11:30:22.706995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.160 [2024-11-20 11:30:22.707003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.160 [2024-11-20 11:30:22.707011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.160 [2024-11-20 11:30:22.719786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.160 [2024-11-20 11:30:22.720373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.160 [2024-11-20 11:30:22.720396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:30.160 [2024-11-20 11:30:22.720405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:30.160 [2024-11-20 11:30:22.720631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:30.160 [2024-11-20 11:30:22.720853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.160 [2024-11-20 11:30:22.720863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.160 [2024-11-20 11:30:22.720871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.160 [2024-11-20 11:30:22.720878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.160 [2024-11-20 11:30:22.733641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.160 [2024-11-20 11:30:22.734170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.160 [2024-11-20 11:30:22.734190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:30.160 [2024-11-20 11:30:22.734198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:30.160 [2024-11-20 11:30:22.734418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:30.160 [2024-11-20 11:30:22.734639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.160 [2024-11-20 11:30:22.734649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.160 [2024-11-20 11:30:22.734657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.160 [2024-11-20 11:30:22.734664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.160 [2024-11-20 11:30:22.747634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.160 [2024-11-20 11:30:22.748217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.160 [2024-11-20 11:30:22.748246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:30.160 [2024-11-20 11:30:22.748254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:30.160 [2024-11-20 11:30:22.748482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:30.160 [2024-11-20 11:30:22.748704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.160 [2024-11-20 11:30:22.748715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.160 [2024-11-20 11:30:22.748723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.160 [2024-11-20 11:30:22.748732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.160 [2024-11-20 11:30:22.761523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.160 [2024-11-20 11:30:22.762045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.160 [2024-11-20 11:30:22.762092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.160 [2024-11-20 11:30:22.762104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.160 [2024-11-20 11:30:22.762361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.160 [2024-11-20 11:30:22.762588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.160 [2024-11-20 11:30:22.762603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.160 [2024-11-20 11:30:22.762612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.160 [2024-11-20 11:30:22.762620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.160 [2024-11-20 11:30:22.775421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.160 [2024-11-20 11:30:22.776071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.160 [2024-11-20 11:30:22.776119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.160 [2024-11-20 11:30:22.776131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.160 [2024-11-20 11:30:22.776390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.160 [2024-11-20 11:30:22.776617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.160 [2024-11-20 11:30:22.776627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.160 [2024-11-20 11:30:22.776635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.160 [2024-11-20 11:30:22.776644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.160 [2024-11-20 11:30:22.789423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.160 [2024-11-20 11:30:22.790082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.160 [2024-11-20 11:30:22.790133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.160 [2024-11-20 11:30:22.790146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.160 [2024-11-20 11:30:22.790402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.160 [2024-11-20 11:30:22.790629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.160 [2024-11-20 11:30:22.790639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.160 [2024-11-20 11:30:22.790648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.160 [2024-11-20 11:30:22.790657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.160 [2024-11-20 11:30:22.803442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.160 [2024-11-20 11:30:22.804059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.160 [2024-11-20 11:30:22.804086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.160 [2024-11-20 11:30:22.804095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.160 [2024-11-20 11:30:22.804324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.160 [2024-11-20 11:30:22.804547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.160 [2024-11-20 11:30:22.804558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.804567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.804575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.161 [2024-11-20 11:30:22.817371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.161 [2024-11-20 11:30:22.817837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-11-20 11:30:22.817861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.161 [2024-11-20 11:30:22.817870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.161 [2024-11-20 11:30:22.818092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.161 [2024-11-20 11:30:22.818323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.161 [2024-11-20 11:30:22.818334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.818343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.818351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.161 [2024-11-20 11:30:22.831333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.161 [2024-11-20 11:30:22.832027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-11-20 11:30:22.832091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.161 [2024-11-20 11:30:22.832104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.161 [2024-11-20 11:30:22.832371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.161 [2024-11-20 11:30:22.832600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.161 [2024-11-20 11:30:22.832613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.832622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.832632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.161 [2024-11-20 11:30:22.845243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.161 [2024-11-20 11:30:22.845929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-11-20 11:30:22.845994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.161 [2024-11-20 11:30:22.846007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.161 [2024-11-20 11:30:22.846277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.161 [2024-11-20 11:30:22.846507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.161 [2024-11-20 11:30:22.846519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.846528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.846538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.161 [2024-11-20 11:30:22.859117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.161 [2024-11-20 11:30:22.859920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-11-20 11:30:22.859993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.161 [2024-11-20 11:30:22.860007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.161 [2024-11-20 11:30:22.860280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.161 [2024-11-20 11:30:22.860510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.161 [2024-11-20 11:30:22.860521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.860530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.860539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.161 [2024-11-20 11:30:22.872934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.161 [2024-11-20 11:30:22.873528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-11-20 11:30:22.873561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.161 [2024-11-20 11:30:22.873570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.161 [2024-11-20 11:30:22.873794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.161 [2024-11-20 11:30:22.874019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.161 [2024-11-20 11:30:22.874030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.874041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.874049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.161 [2024-11-20 11:30:22.886821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.161 [2024-11-20 11:30:22.887530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-11-20 11:30:22.887594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.161 [2024-11-20 11:30:22.887606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.161 [2024-11-20 11:30:22.887862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.161 [2024-11-20 11:30:22.888091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.161 [2024-11-20 11:30:22.888103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.161 [2024-11-20 11:30:22.888112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.161 [2024-11-20 11:30:22.888123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.900741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.901342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.901375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.901385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.901618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.901843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.901858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.901867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.901875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.914697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.915301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.915366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.915380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.915637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.915866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.915879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.915888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.915897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.928425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.929035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.929093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.929103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.929302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.929462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.929471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.929477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.929485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.941151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.941749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.941802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.941812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.941994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.942152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.942172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.942192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.942200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.953843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.954494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.954545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.954555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.954733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.954892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.954900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.954906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.954915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.966572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.967172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.967219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.967229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.967409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.967566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.967574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.967581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.967587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.979232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.979822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.979866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.979875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.980051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.424 [2024-11-20 11:30:22.980218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.424 [2024-11-20 11:30:22.980227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.424 [2024-11-20 11:30:22.980233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.424 [2024-11-20 11:30:22.980240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.424 [2024-11-20 11:30:22.991874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.424 [2024-11-20 11:30:22.992519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.424 [2024-11-20 11:30:22.992560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.424 [2024-11-20 11:30:22.992569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.424 [2024-11-20 11:30:22.992741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:22.992897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:22.992906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:22.992913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:22.992920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.004564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.005141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.005184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.005194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.005367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.005522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.005530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.005537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.005544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.017313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.017944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.017953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.018123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.018288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.018296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.018302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.018309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.030076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.030686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.030725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.030733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.030902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.031057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.031064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.031070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.031076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.042709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.043369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.043403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.043412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.043580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.043735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.043742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.043747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.043753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.055376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.055960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.055993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.056002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.056176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.056331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.056339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.056345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.056351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.068136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.068611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.068644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.068653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.068821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.068980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.068987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.068993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.068999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.080764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.081282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.081314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.081323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.081489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.081643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.081650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.081656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.081662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.093424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.094024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.094055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.094064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.094237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.094392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.094399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.094405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.094411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.106179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.106787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.106818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.106827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.425 [2024-11-20 11:30:23.106994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.425 [2024-11-20 11:30:23.107148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.425 [2024-11-20 11:30:23.107155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.425 [2024-11-20 11:30:23.107173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.425 [2024-11-20 11:30:23.107179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.425 [2024-11-20 11:30:23.118797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.425 [2024-11-20 11:30:23.119297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.425 [2024-11-20 11:30:23.119328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.425 [2024-11-20 11:30:23.119337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.426 [2024-11-20 11:30:23.119506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.426 [2024-11-20 11:30:23.119660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.426 [2024-11-20 11:30:23.119668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.426 [2024-11-20 11:30:23.119674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.426 [2024-11-20 11:30:23.119680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.426 [2024-11-20 11:30:23.131452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.426 [2024-11-20 11:30:23.132039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.426 [2024-11-20 11:30:23.132070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.426 [2024-11-20 11:30:23.132079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.426 [2024-11-20 11:30:23.132251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.426 [2024-11-20 11:30:23.132407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.426 [2024-11-20 11:30:23.132413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.426 [2024-11-20 11:30:23.132419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.426 [2024-11-20 11:30:23.132425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.426 [2024-11-20 11:30:23.144193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.426 [2024-11-20 11:30:23.144802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.426 [2024-11-20 11:30:23.144834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.426 [2024-11-20 11:30:23.144842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.426 [2024-11-20 11:30:23.145009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.426 [2024-11-20 11:30:23.145170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.426 [2024-11-20 11:30:23.145178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.426 [2024-11-20 11:30:23.145183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.426 [2024-11-20 11:30:23.145189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.426 [2024-11-20 11:30:23.156804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.426 [2024-11-20 11:30:23.157290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.426 [2024-11-20 11:30:23.157321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.426 [2024-11-20 11:30:23.157330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.426 [2024-11-20 11:30:23.157499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.426 [2024-11-20 11:30:23.157653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.426 [2024-11-20 11:30:23.157660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.426 [2024-11-20 11:30:23.157666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.426 [2024-11-20 11:30:23.157671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.169441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.170039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.170070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.170078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.170253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.170408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.170415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.170421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.170427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.182185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.182655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.182686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.182695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.182863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.183017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.183024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.183030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.183036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.194798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.195443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.195474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.195486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.195653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.195807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.195814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.195820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.195825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.207450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.208027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.208059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.208067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.208240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.208395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.208402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.208408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.208414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.220170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.220777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.220809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.220817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.220984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.221138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.221145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.221150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.221156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.232921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.233485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.233516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.233525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.233691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.233849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.233856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.233862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.233867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.245633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.246208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.246238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.246247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.246416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.246570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.246577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.246583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.246588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.258351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.258946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.258978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.258986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.259153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.259314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.259322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.259328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.259334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.271090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.687 [2024-11-20 11:30:23.271692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.687 [2024-11-20 11:30:23.271724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.687 [2024-11-20 11:30:23.271733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.687 [2024-11-20 11:30:23.271900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.687 [2024-11-20 11:30:23.272054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.687 [2024-11-20 11:30:23.272062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.687 [2024-11-20 11:30:23.272071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.687 [2024-11-20 11:30:23.272076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.687 [2024-11-20 11:30:23.283841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.284460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.284491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.284500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.284667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.284821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.284828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.284834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.284839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.296452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.297020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.297051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.297059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.297234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.297389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.297397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.297403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.297409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.309173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.309769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.309800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.309808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.309975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.310129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.310136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.310142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.310148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.321911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.322555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.322586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.322595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.322761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.322916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.322923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.322929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.322934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.334556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.335010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.335026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.335032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.335188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.335341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.335347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.335353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.335357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 6379.75 IOPS, 24.92 MiB/s [2024-11-20T10:30:23.430Z] [2024-11-20 11:30:23.347247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.347794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.347826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.347834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.348001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.348155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.348170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.348176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.348182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
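(Editor's aside: the bdevperf progress sample just above, 6379.75 IOPS at 24.92 MiB/s, is self-consistent for a 4 KiB I/O size; the block size itself is an assumption, since the job parameters are not shown in this excerpt. A minimal standalone C check of that arithmetic:

/* Hypothetical sanity check, not part of the test suite: converts the
 * IOPS figure in the progress line above to MiB/s, assuming 4 KiB I/Os. */
#include <stdio.h>

int main(void)
{
    double iops = 6379.75;                 /* from the progress line above */
    double io_size = 4096.0;               /* assumed block size in bytes */
    double mib_s = iops * io_size / (1024.0 * 1024.0);
    printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_s);  /* prints 24.92 */
    return 0;
})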
00:29:30.688 [2024-11-20 11:30:23.359936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.360536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.360567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.360578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.360745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.360899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.360906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.360912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.360918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.372678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.373260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.373292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.373301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.373469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.373624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.373631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.373637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.373643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.385418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.386020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.688 [2024-11-20 11:30:23.386051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.688 [2024-11-20 11:30:23.386060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.688 [2024-11-20 11:30:23.386235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.688 [2024-11-20 11:30:23.386390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.688 [2024-11-20 11:30:23.386397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.688 [2024-11-20 11:30:23.386403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.688 [2024-11-20 11:30:23.386409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.688 [2024-11-20 11:30:23.398026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.688 [2024-11-20 11:30:23.398525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.689 [2024-11-20 11:30:23.398541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.689 [2024-11-20 11:30:23.398547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.689 [2024-11-20 11:30:23.398698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.689 [2024-11-20 11:30:23.398854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.689 [2024-11-20 11:30:23.398861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.689 [2024-11-20 11:30:23.398866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.689 [2024-11-20 11:30:23.398871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.689 [2024-11-20 11:30:23.410777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.689 [2024-11-20 11:30:23.411276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.689 [2024-11-20 11:30:23.411308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.689 [2024-11-20 11:30:23.411316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.689 [2024-11-20 11:30:23.411483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.689 [2024-11-20 11:30:23.411637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.689 [2024-11-20 11:30:23.411644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.689 [2024-11-20 11:30:23.411650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.689 [2024-11-20 11:30:23.411656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.689 [2024-11-20 11:30:23.423425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.689 [2024-11-20 11:30:23.424002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.689 [2024-11-20 11:30:23.424034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.689 [2024-11-20 11:30:23.424043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.689 [2024-11-20 11:30:23.424218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.689 [2024-11-20 11:30:23.424374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.689 [2024-11-20 11:30:23.424381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.689 [2024-11-20 11:30:23.424388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.689 [2024-11-20 11:30:23.424394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.950 [2024-11-20 11:30:23.436156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.950 [2024-11-20 11:30:23.436630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.950 [2024-11-20 11:30:23.436661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.950 [2024-11-20 11:30:23.436670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.950 [2024-11-20 11:30:23.436838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.950 [2024-11-20 11:30:23.436993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.437000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.437009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.437015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.448791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.449299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.449331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.449339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.449509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.449663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.449670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.449675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.449682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.461446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.461928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.461958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.461967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.462134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.462294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.462302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.462308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.462315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.474075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.474582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.474598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.474604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.474755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.474906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.474913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.474919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.474924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.486822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.487302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.487334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.487342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.487512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.487666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.487673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.487678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.487684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.499452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.500048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.500079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.500088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.500262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.500417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.500424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.500430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.500436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.512202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.512753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.512784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.512793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.512960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.513114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.513121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.513127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.513132] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.524894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.525441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.525473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.525484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.525652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.525806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.525813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.525819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.525824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.537591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.538182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.538213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.538222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.538388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.538542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.538550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.538555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.538561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.550334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.550905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.951 [2024-11-20 11:30:23.550936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.951 [2024-11-20 11:30:23.550944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.951 [2024-11-20 11:30:23.551111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.951 [2024-11-20 11:30:23.551272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.951 [2024-11-20 11:30:23.551280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.951 [2024-11-20 11:30:23.551286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.951 [2024-11-20 11:30:23.551292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.951 [2024-11-20 11:30:23.563049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.951 [2024-11-20 11:30:23.563538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.563570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.563579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.563746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.563907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.563915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.563921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.563927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.575695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.576249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.576280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.576289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.576458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.576612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.576619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.576625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.576630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.588395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.588849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.588865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.588871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.589022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.589181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.589188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.589193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.589198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.601128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.601686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.601717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.601726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.601893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.602047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.602055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.602067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.602074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.613851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.614323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.614339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.614346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.614497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.614649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.614656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.614661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.614666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.626569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.627033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.627047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.627052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.627207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.627359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.627365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.627371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.627376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.639284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.639726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.639740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.639745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.639896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.640047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.640053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.640059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.640063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.652032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.652460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.652492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.652500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.652667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.652821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.652828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.652834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.652840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.664755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.665213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.665235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.665241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.665398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.665550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.665556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.665562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.665567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:30.952 [2024-11-20 11:30:23.677481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:30.952 [2024-11-20 11:30:23.678060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.952 [2024-11-20 11:30:23.678092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:30.952 [2024-11-20 11:30:23.678101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:30.952 [2024-11-20 11:30:23.678277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:30.952 [2024-11-20 11:30:23.678431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:30.952 [2024-11-20 11:30:23.678439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:30.952 [2024-11-20 11:30:23.678444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:30.952 [2024-11-20 11:30:23.678450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.215 [2024-11-20 11:30:23.690222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.690733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.690749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.690759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.690910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.691062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.691069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.691074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.691079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.702863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.703446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.703478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.703487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.703654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.703808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.703815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.703821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.703827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.715532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.715983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.716014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.716022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.716195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.716350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.716357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.716363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.716369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.728288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.728846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.728877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.728886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.729052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.729213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.729225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.729231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.729237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.741021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.741607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.741639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.741648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.741815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.741969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.741976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.741982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.741988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.753766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.754243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.754273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.754281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.754449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.754604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.754611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.754617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.754623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.766403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.766871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.766903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.766911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.767079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.767241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.767249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.767255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.767264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.779039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:31.216 [2024-11-20 11:30:23.779601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.216 [2024-11-20 11:30:23.779632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:31.216 [2024-11-20 11:30:23.779641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:31.216 [2024-11-20 11:30:23.779808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:31.216 [2024-11-20 11:30:23.779962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:31.216 [2024-11-20 11:30:23.779970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:31.216 [2024-11-20 11:30:23.779975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:31.216 [2024-11-20 11:30:23.779981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:31.216 [2024-11-20 11:30:23.791753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.216 [2024-11-20 11:30:23.792296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.216 [2024-11-20 11:30:23.792328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.216 [2024-11-20 11:30:23.792336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.216 [2024-11-20 11:30:23.792504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.216 [2024-11-20 11:30:23.792659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.216 [2024-11-20 11:30:23.792666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.216 [2024-11-20 11:30:23.792671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.216 [2024-11-20 11:30:23.792677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.216 [2024-11-20 11:30:23.804458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.216 [2024-11-20 11:30:23.804954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.216 [2024-11-20 11:30:23.804970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.216 [2024-11-20 11:30:23.804975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.216 [2024-11-20 11:30:23.805126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.216 [2024-11-20 11:30:23.805282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.805290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.805295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.805300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.217 [2024-11-20 11:30:23.817081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.817644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.817676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.817684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.817851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.818006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.818013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.818019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.818025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.217 [2024-11-20 11:30:23.829798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.830381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.830413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.830422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.830588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.830743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.830750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.830755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.830761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.217 [2024-11-20 11:30:23.842549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.843152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.843189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.843197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.843363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.843517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.843524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.843530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.843536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.217 [2024-11-20 11:30:23.855310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.855807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.855822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.855828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.855983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.856135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.856141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.856147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.856152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.217 [2024-11-20 11:30:23.867920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.868503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.868535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.868545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.868714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.868869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.868876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.868882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.868889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.217 [2024-11-20 11:30:23.880667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.881138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.881175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.881185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.881353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.881508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.881515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.881521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.881526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.217 [2024-11-20 11:30:23.893302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.893915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.893947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.893956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.894122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.894281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.894292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.894298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.894304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.217 [2024-11-20 11:30:23.905932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.906448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.906465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.906472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.906623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.906775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.906782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.906787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.906792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.217 [2024-11-20 11:30:23.918663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.919109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.919124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.919129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.919286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.919438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.217 [2024-11-20 11:30:23.919445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.217 [2024-11-20 11:30:23.919451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.217 [2024-11-20 11:30:23.919456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.217 [2024-11-20 11:30:23.931378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.217 [2024-11-20 11:30:23.931825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.217 [2024-11-20 11:30:23.931839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.217 [2024-11-20 11:30:23.931844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.217 [2024-11-20 11:30:23.931994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.217 [2024-11-20 11:30:23.932148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.218 [2024-11-20 11:30:23.932156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.218 [2024-11-20 11:30:23.932167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.218 [2024-11-20 11:30:23.932175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.218 [2024-11-20 11:30:23.944104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.218 [2024-11-20 11:30:23.944676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.218 [2024-11-20 11:30:23.944708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.218 [2024-11-20 11:30:23.944717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.218 [2024-11-20 11:30:23.944886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.218 [2024-11-20 11:30:23.945040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.218 [2024-11-20 11:30:23.945047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.218 [2024-11-20 11:30:23.945053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.218 [2024-11-20 11:30:23.945059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.480 [2024-11-20 11:30:23.956847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.480 [2024-11-20 11:30:23.957322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.480 [2024-11-20 11:30:23.957339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.480 [2024-11-20 11:30:23.957345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.480 [2024-11-20 11:30:23.957497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.480 [2024-11-20 11:30:23.957649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.480 [2024-11-20 11:30:23.957656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.480 [2024-11-20 11:30:23.957662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.480 [2024-11-20 11:30:23.957667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.480 [2024-11-20 11:30:23.969587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.480 [2024-11-20 11:30:23.969833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.480 [2024-11-20 11:30:23.969847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.480 [2024-11-20 11:30:23.969853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.480 [2024-11-20 11:30:23.970005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.480 [2024-11-20 11:30:23.970156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.480 [2024-11-20 11:30:23.970169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.480 [2024-11-20 11:30:23.970174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.480 [2024-11-20 11:30:23.970179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.480 [2024-11-20 11:30:23.982241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.480 [2024-11-20 11:30:23.982733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.480 [2024-11-20 11:30:23.982747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.480 [2024-11-20 11:30:23.982752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.480 [2024-11-20 11:30:23.982903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.480 [2024-11-20 11:30:23.983054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.480 [2024-11-20 11:30:23.983061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.480 [2024-11-20 11:30:23.983066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.480 [2024-11-20 11:30:23.983071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.480 [2024-11-20 11:30:23.994989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.480 [2024-11-20 11:30:23.995561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.480 [2024-11-20 11:30:23.995592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.480 [2024-11-20 11:30:23.995601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.480 [2024-11-20 11:30:23.995768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.480 [2024-11-20 11:30:23.995922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.480 [2024-11-20 11:30:23.995929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.480 [2024-11-20 11:30:23.995935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.480 [2024-11-20 11:30:23.995941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.480 [2024-11-20 11:30:24.007713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.480 [2024-11-20 11:30:24.008269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.480 [2024-11-20 11:30:24.008300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.480 [2024-11-20 11:30:24.008309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.480 [2024-11-20 11:30:24.008478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.480 [2024-11-20 11:30:24.008632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.480 [2024-11-20 11:30:24.008639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.480 [2024-11-20 11:30:24.008645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.480 [2024-11-20 11:30:24.008651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.480 [2024-11-20 11:30:24.020441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.480 [2024-11-20 11:30:24.020975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.480 [2024-11-20 11:30:24.021006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.480 [2024-11-20 11:30:24.021015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.480 [2024-11-20 11:30:24.021190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.480 [2024-11-20 11:30:24.021345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.021352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.021358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.021364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.481 [2024-11-20 11:30:24.033129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.033636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.033651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.033658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.033809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.033960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.033967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.033973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.033978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.481 [2024-11-20 11:30:24.045747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.046207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.046221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.046227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.046378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.046530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.046537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.046542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.046547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.481 [2024-11-20 11:30:24.058451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.058931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.058945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.058951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.059101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.059258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.059268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.059274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.059280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.481 [2024-11-20 11:30:24.071180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.071740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.071772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.071781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.071947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.072102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.072109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.072114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.072120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.481 [2024-11-20 11:30:24.083901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.084410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.084442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.084451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.084619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.084773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.084780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.084786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.084793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.481 [2024-11-20 11:30:24.096560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.097018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.097034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.097040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.097196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.097348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.097355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.097360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.097369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.481 [2024-11-20 11:30:24.109279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.109762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.109776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.109782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.109932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.110084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.110091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.110096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.110101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.481 [2024-11-20 11:30:24.122013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.122566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.122598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.122606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.122773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.122927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.122934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.122940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.122946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.481 [2024-11-20 11:30:24.134715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.135074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.135090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.135095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.135251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.481 [2024-11-20 11:30:24.135404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.481 [2024-11-20 11:30:24.135410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.481 [2024-11-20 11:30:24.135415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.481 [2024-11-20 11:30:24.135421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.481 [2024-11-20 11:30:24.147344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.481 [2024-11-20 11:30:24.147823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.481 [2024-11-20 11:30:24.147858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.481 [2024-11-20 11:30:24.147866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.481 [2024-11-20 11:30:24.148033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.482 [2024-11-20 11:30:24.148193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.482 [2024-11-20 11:30:24.148201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.482 [2024-11-20 11:30:24.148208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.482 [2024-11-20 11:30:24.148214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.482 [2024-11-20 11:30:24.159986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.482 [2024-11-20 11:30:24.160473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.482 [2024-11-20 11:30:24.160505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.482 [2024-11-20 11:30:24.160513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.482 [2024-11-20 11:30:24.160680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.482 [2024-11-20 11:30:24.160834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.482 [2024-11-20 11:30:24.160841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.482 [2024-11-20 11:30:24.160847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.482 [2024-11-20 11:30:24.160853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.482 [2024-11-20 11:30:24.172634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.482 [2024-11-20 11:30:24.173109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.482 [2024-11-20 11:30:24.173139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.482 [2024-11-20 11:30:24.173148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.482 [2024-11-20 11:30:24.173321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.482 [2024-11-20 11:30:24.173476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.482 [2024-11-20 11:30:24.173483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.482 [2024-11-20 11:30:24.173489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.482 [2024-11-20 11:30:24.173495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.482 [2024-11-20 11:30:24.185265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.482 [2024-11-20 11:30:24.185770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.482 [2024-11-20 11:30:24.185785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.482 [2024-11-20 11:30:24.185791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.482 [2024-11-20 11:30:24.185949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.482 [2024-11-20 11:30:24.186101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.482 [2024-11-20 11:30:24.186108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.482 [2024-11-20 11:30:24.186114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.482 [2024-11-20 11:30:24.186119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.482 [2024-11-20 11:30:24.197884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.482 [2024-11-20 11:30:24.198359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.482 [2024-11-20 11:30:24.198373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.482 [2024-11-20 11:30:24.198379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.482 [2024-11-20 11:30:24.198529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.482 [2024-11-20 11:30:24.198681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.482 [2024-11-20 11:30:24.198688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.482 [2024-11-20 11:30:24.198694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.482 [2024-11-20 11:30:24.198698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.482 [2024-11-20 11:30:24.210614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.482 [2024-11-20 11:30:24.211061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.482 [2024-11-20 11:30:24.211074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.482 [2024-11-20 11:30:24.211080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.482 [2024-11-20 11:30:24.211235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.482 [2024-11-20 11:30:24.211387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.482 [2024-11-20 11:30:24.211394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.482 [2024-11-20 11:30:24.211399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.482 [2024-11-20 11:30:24.211404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 11:30:24.223317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.223901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.223933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.223942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.224110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.224271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.224283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.224289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.224296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 11:30:24.236063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.236648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.236679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.236688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.236854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.237010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.237016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.237022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.237028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 11:30:24.248805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.249423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.249454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.249463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.249631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.249785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.249792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.249798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.249804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 11:30:24.261433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.262039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.262071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.262080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.262253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.262408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.262415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.262421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.262427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 11:30:24.274054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.274650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.274682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.274690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.274857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.275012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.275019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.275025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.275030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 11:30:24.286804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.287168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.287185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.287192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.287343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.287496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.287502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.287508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.287513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 11:30:24.299417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.299926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.299958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.299966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.300133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.300292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.300300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.300307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.300314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 11:30:24.312089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 11:30:24.312574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 11:30:24.312609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 11:30:24.312618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.749 [2024-11-20 11:30:24.312785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.749 [2024-11-20 11:30:24.312940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 11:30:24.312946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 11:30:24.312952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 11:30:24.312958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 [2024-11-20 11:30:24.324728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.325380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.325411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.325420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.325586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.325740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.325747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.325753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.325759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 11:30:24.337390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.337922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.337953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.337962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.338128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.338288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.338296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.338303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.338308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 5103.80 IOPS, 19.94 MiB/s [2024-11-20T10:30:24.492Z] [2024-11-20 11:30:24.350071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.350598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.350613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.350619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.350775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.350927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.350934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.350940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.350944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 11:30:24.362705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.363270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.363302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.363311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.363480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.363634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.363641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.363647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.363653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
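Note: the fragment "5103.80 IOPS, 19.94 MiB/s [2024-11-20T10:30:24.492Z]" interleaved above looks like bdevperf's periodic throughput line, stamped in UTC (the Z suffix), hence the one-hour offset from the surrounding 11:30 messages. The two figures agree if each I/O is 4 KiB, which this bdevperf run is assumed to use:

    # Sanity check, assuming 4096-byte I/Os: IOPS * bytes per I/O -> MiB/s.
    echo "scale=2; 5103.80 * 4096 / 1048576" | bc   # prints 19.93, i.e. the logged 19.94 MiB/s before rounding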
00:29:31.750 [2024-11-20 11:30:24.375425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.376000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.376031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.376040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.376214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.376369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.376376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.376382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.376388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 11:30:24.388146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.388725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.388757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.388766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.388932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.389087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.389094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.389104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.389110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 [2024-11-20 11:30:24.400881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.401493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.401524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.401533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.401700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.401854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.401861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.401867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.401874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 11:30:24.413503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.413993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.414008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.414014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.414173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.414326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.414333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.414338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.414344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 [2024-11-20 11:30:24.426241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.426806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.426837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.426846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.427012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.427174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.427182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.427188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.427194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 11:30:24.438960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.439515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.439546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.439555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.439721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.439876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.439883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.439888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.439894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 [2024-11-20 11:30:24.451668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.452267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.452298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.452307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.452476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.452631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.452638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.452644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.452651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 11:30:24.464286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.464771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.464802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.464811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.464979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.465134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.465141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.465147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.465153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 [2024-11-20 11:30:24.476919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 11:30:24.477482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 11:30:24.477517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 11:30:24.477526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:31.750 [2024-11-20 11:30:24.477692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:31.750 [2024-11-20 11:30:24.477846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 11:30:24.477853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 11:30:24.477859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 11:30:24.477866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.013 [2024-11-20 11:30:24.489636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.013 [2024-11-20 11:30:24.490133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.013 [2024-11-20 11:30:24.490149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.013 [2024-11-20 11:30:24.490155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.013 [2024-11-20 11:30:24.490312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.013 [2024-11-20 11:30:24.490464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.013 [2024-11-20 11:30:24.490471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.013 [2024-11-20 11:30:24.490476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.013 [2024-11-20 11:30:24.490482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.013 [2024-11-20 11:30:24.502382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.013 [2024-11-20 11:30:24.502835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.013 [2024-11-20 11:30:24.502848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.013 [2024-11-20 11:30:24.502854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.013 [2024-11-20 11:30:24.503005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.013 [2024-11-20 11:30:24.503157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.013 [2024-11-20 11:30:24.503169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.013 [2024-11-20 11:30:24.503174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.013 [2024-11-20 11:30:24.503179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.013 [2024-11-20 11:30:24.515098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.013 [2024-11-20 11:30:24.515691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.013 [2024-11-20 11:30:24.515722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.013 [2024-11-20 11:30:24.515731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.013 [2024-11-20 11:30:24.515898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.013 [2024-11-20 11:30:24.516056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.013 [2024-11-20 11:30:24.516063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.013 [2024-11-20 11:30:24.516069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.013 [2024-11-20 11:30:24.516075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.013 [2024-11-20 11:30:24.527851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.013 [2024-11-20 11:30:24.528321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.013 [2024-11-20 11:30:24.528337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.013 [2024-11-20 11:30:24.528343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.013 [2024-11-20 11:30:24.528495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.013 [2024-11-20 11:30:24.528647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.013 [2024-11-20 11:30:24.528654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.013 [2024-11-20 11:30:24.528659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.013 [2024-11-20 11:30:24.528664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.013 [2024-11-20 11:30:24.540579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.013 [2024-11-20 11:30:24.541022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.013 [2024-11-20 11:30:24.541035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.013 [2024-11-20 11:30:24.541041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.541197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.541349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.541355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.541361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.541366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.014 [2024-11-20 11:30:24.553279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.553742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.553755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.553761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.553912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.554063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.554070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.554079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.554084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.014 [2024-11-20 11:30:24.565980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.566597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.566606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.566773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.566927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.566934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.566940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.566945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.014 [2024-11-20 11:30:24.578708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.579315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.579346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.579355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.579522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.579676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.579683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.579689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.579695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.014 [2024-11-20 11:30:24.591459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.592055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.592087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.592096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.592270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.592425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.592432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.592438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.592444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.014 [2024-11-20 11:30:24.604090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.604697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.604728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.604737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.604903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.605057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.605065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.605071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.605076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.014 [2024-11-20 11:30:24.616708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.617313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.617344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.617352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.617519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.617673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.617680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.617686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.617692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.014 [2024-11-20 11:30:24.629461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.629964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.629979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.629985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.630136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.630294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.630301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.630306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.630312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.014 [2024-11-20 11:30:24.642209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.642669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.642683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.642692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.642843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.642994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.643001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.643006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.643011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.014 [2024-11-20 11:30:24.654928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.655294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.655308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.014 [2024-11-20 11:30:24.655314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.014 [2024-11-20 11:30:24.655465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.014 [2024-11-20 11:30:24.655616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.014 [2024-11-20 11:30:24.655623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.014 [2024-11-20 11:30:24.655628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.014 [2024-11-20 11:30:24.655633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.014 [2024-11-20 11:30:24.667622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.014 [2024-11-20 11:30:24.668095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.014 [2024-11-20 11:30:24.668109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.668114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.668271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.668423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.668430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.668435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.668440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.015 [2024-11-20 11:30:24.680329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.015 [2024-11-20 11:30:24.680919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.015 [2024-11-20 11:30:24.680950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.680959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.681126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.681293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.681301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.681307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.681313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.015 [2024-11-20 11:30:24.693069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.015 [2024-11-20 11:30:24.693575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.015 [2024-11-20 11:30:24.693592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.693598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.693749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.693901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.693908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.693913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.693919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.015 [2024-11-20 11:30:24.705678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.015 [2024-11-20 11:30:24.706118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.015 [2024-11-20 11:30:24.706131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.706137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.706293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.706445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.706451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.706457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.706461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.015 [2024-11-20 11:30:24.718360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.015 [2024-11-20 11:30:24.718812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.015 [2024-11-20 11:30:24.718826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.718831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.718982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.719133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.719140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.719149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.719154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.015 [2024-11-20 11:30:24.731051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.015 [2024-11-20 11:30:24.731393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.015 [2024-11-20 11:30:24.731408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.731414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.731565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.731717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.731723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.731729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.731735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.015 [2024-11-20 11:30:24.743784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.015 [2024-11-20 11:30:24.744374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.015 [2024-11-20 11:30:24.744406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.015 [2024-11-20 11:30:24.744415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.015 [2024-11-20 11:30:24.744581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.015 [2024-11-20 11:30:24.744735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.015 [2024-11-20 11:30:24.744742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.015 [2024-11-20 11:30:24.744748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.015 [2024-11-20 11:30:24.744754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.279 [2024-11-20 11:30:24.756532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.279 [2024-11-20 11:30:24.757136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.279 [2024-11-20 11:30:24.757174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.279 [2024-11-20 11:30:24.757182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.279 [2024-11-20 11:30:24.757349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.279 [2024-11-20 11:30:24.757503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.279 [2024-11-20 11:30:24.757510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.279 [2024-11-20 11:30:24.757516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.279 [2024-11-20 11:30:24.757522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.279 [2024-11-20 11:30:24.769143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.279 [2024-11-20 11:30:24.769734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.279 [2024-11-20 11:30:24.769766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.279 [2024-11-20 11:30:24.769775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.279 [2024-11-20 11:30:24.769942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.279 [2024-11-20 11:30:24.770096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.279 [2024-11-20 11:30:24.770103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.279 [2024-11-20 11:30:24.770109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.279 [2024-11-20 11:30:24.770115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.279 [2024-11-20 11:30:24.781900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.279 [2024-11-20 11:30:24.782463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.279 [2024-11-20 11:30:24.782494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.279 [2024-11-20 11:30:24.782503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.279 [2024-11-20 11:30:24.782671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.279 [2024-11-20 11:30:24.782825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.279 [2024-11-20 11:30:24.782832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.279 [2024-11-20 11:30:24.782839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.279 [2024-11-20 11:30:24.782845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.279 [2024-11-20 11:30:24.794609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.279 [2024-11-20 11:30:24.795164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.279 [2024-11-20 11:30:24.795195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.279 [2024-11-20 11:30:24.795203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.279 [2024-11-20 11:30:24.795370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.279 [2024-11-20 11:30:24.795524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.279 [2024-11-20 11:30:24.795530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.279 [2024-11-20 11:30:24.795536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.279 [2024-11-20 11:30:24.795542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.279 [2024-11-20 11:30:24.807307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.279 [2024-11-20 11:30:24.807901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.279 [2024-11-20 11:30:24.807932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.279 [2024-11-20 11:30:24.807948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.279 [2024-11-20 11:30:24.808115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.279 [2024-11-20 11:30:24.808277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.279 [2024-11-20 11:30:24.808285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.279 [2024-11-20 11:30:24.808291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.279 [2024-11-20 11:30:24.808297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.279 [2024-11-20 11:30:24.819920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.279 [2024-11-20 11:30:24.820536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.279 [2024-11-20 11:30:24.820567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.279 [2024-11-20 11:30:24.820576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.279 [2024-11-20 11:30:24.820743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.279 [2024-11-20 11:30:24.820897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.279 [2024-11-20 11:30:24.820904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.279 [2024-11-20 11:30:24.820910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.279 [2024-11-20 11:30:24.820915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.279 [2024-11-20 11:30:24.832540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.279 [2024-11-20 11:30:24.833139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.279 [2024-11-20 11:30:24.833176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:32.279 [2024-11-20 11:30:24.833185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:32.279 [2024-11-20 11:30:24.833353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:32.279 [2024-11-20 11:30:24.833508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.279 [2024-11-20 11:30:24.833515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.279 [2024-11-20 11:30:24.833521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.279 [2024-11-20 11:30:24.833527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.279 [2024-11-20 11:30:24.845295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2917414 Killed "${NVMF_APP[@]}" "$@"
00:29:32.279 [2024-11-20 11:30:24.845676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.279 [2024-11-20 11:30:24.845708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:32.279 [2024-11-20 11:30:24.845716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:32.279 [2024-11-20 11:30:24.845886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:32.280 [2024-11-20 11:30:24.846041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.280 [2024-11-20 11:30:24.846048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.280 [2024-11-20 11:30:24.846055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.280 [2024-11-20 11:30:24.846061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
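The shell job-control notice above ("line 35: 2917414 Killed ...") is the harness reporting that the NVMe-oF target process was SIGKILLed as part of the test, which is why every reconnect attempt in this stretch is refused at the TCP layer: errno 111 is ECONNREFUSED on Linux. A quick way to confirm that mapping on any Linux box (an illustration, not part of the test scripts):

  # prints: ECONNREFUSED - Connection refused
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'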
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2919115
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2919115
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2919115 ']'
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:32.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:32.280 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:32.280 [2024-11-20 11:30:24.857972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.280 [2024-11-20 11:30:24.858542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.280 [2024-11-20 11:30:24.858574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:32.280 [2024-11-20 11:30:24.858582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:32.280 [2024-11-20 11:30:24.858749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:32.280 [2024-11-20 11:30:24.858903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.280 [2024-11-20 11:30:24.858910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.280 [2024-11-20 11:30:24.858916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.280 [2024-11-20 11:30:24.858922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.280 [2024-11-20 11:30:24.870703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.871167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.871183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.871189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.871344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.871496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.871502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.871508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.280 [2024-11-20 11:30:24.871513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.280 [2024-11-20 11:30:24.883434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.883883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.883896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.883902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.884053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.884211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.884218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.884224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.280 [2024-11-20 11:30:24.884229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.280 [2024-11-20 11:30:24.896139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.896688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.896720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.896728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.896895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.897050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.897057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.897063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.280 [2024-11-20 11:30:24.897069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.280 [2024-11-20 11:30:24.907392] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:29:32.280 [2024-11-20 11:30:24.907440] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.280 [2024-11-20 11:30:24.908841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.909478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.909510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.909519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.909689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.909844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.909851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.909857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.280 [2024-11-20 11:30:24.909863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.280 [2024-11-20 11:30:24.921498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.922048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.922079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.922088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.922262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.922417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.922424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.922430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.280 [2024-11-20 11:30:24.922436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.280 [2024-11-20 11:30:24.934205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.934675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.934706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.934716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.934884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.935038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.935045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.935051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.280 [2024-11-20 11:30:24.935057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.280 [2024-11-20 11:30:24.946918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.280 [2024-11-20 11:30:24.947506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.280 [2024-11-20 11:30:24.947537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.280 [2024-11-20 11:30:24.947546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.280 [2024-11-20 11:30:24.947713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.280 [2024-11-20 11:30:24.947868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.280 [2024-11-20 11:30:24.947878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.280 [2024-11-20 11:30:24.947883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.281 [2024-11-20 11:30:24.947891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.281 [2024-11-20 11:30:24.959656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.281 [2024-11-20 11:30:24.960217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.281 [2024-11-20 11:30:24.960249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.281 [2024-11-20 11:30:24.960257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.281 [2024-11-20 11:30:24.960427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.281 [2024-11-20 11:30:24.960581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.281 [2024-11-20 11:30:24.960587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.281 [2024-11-20 11:30:24.960593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.281 [2024-11-20 11:30:24.960599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.281 [2024-11-20 11:30:24.972373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.281 [2024-11-20 11:30:24.972968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.281 [2024-11-20 11:30:24.972999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.281 [2024-11-20 11:30:24.973009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.281 [2024-11-20 11:30:24.973184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.281 [2024-11-20 11:30:24.973339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.281 [2024-11-20 11:30:24.973347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.281 [2024-11-20 11:30:24.973354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.281 [2024-11-20 11:30:24.973360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.281 [2024-11-20 11:30:24.984983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.281 [2024-11-20 11:30:24.985483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.281 [2024-11-20 11:30:24.985499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.281 [2024-11-20 11:30:24.985505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.281 [2024-11-20 11:30:24.985656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.281 [2024-11-20 11:30:24.985808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.281 [2024-11-20 11:30:24.985815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.281 [2024-11-20 11:30:24.985821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.281 [2024-11-20 11:30:24.985829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.281 [2024-11-20 11:30:24.997733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.281 [2024-11-20 11:30:24.998380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.281 [2024-11-20 11:30:24.998411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.281 [2024-11-20 11:30:24.998421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.281 [2024-11-20 11:30:24.998587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.281 [2024-11-20 11:30:24.998741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.281 [2024-11-20 11:30:24.998748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.281 [2024-11-20 11:30:24.998754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.281 [2024-11-20 11:30:24.998760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.281 [2024-11-20 11:30:25.000380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.281 [2024-11-20 11:30:25.010391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.281 [2024-11-20 11:30:25.010979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.281 [2024-11-20 11:30:25.011011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.281 [2024-11-20 11:30:25.011019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.281 [2024-11-20 11:30:25.011200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.281 [2024-11-20 11:30:25.011355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.281 [2024-11-20 11:30:25.011362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.281 [2024-11-20 11:30:25.011368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.281 [2024-11-20 11:30:25.011374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.543 [2024-11-20 11:30:25.023137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.543 [2024-11-20 11:30:25.023652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.543 [2024-11-20 11:30:25.023668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.543 [2024-11-20 11:30:25.023674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.543 [2024-11-20 11:30:25.023826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.543 [2024-11-20 11:30:25.023978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.543 [2024-11-20 11:30:25.023986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.543 [2024-11-20 11:30:25.023991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.543 [2024-11-20 11:30:25.023996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.543 [2024-11-20 11:30:25.029310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.543 [2024-11-20 11:30:25.029334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.543 [2024-11-20 11:30:25.029341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.543 [2024-11-20 11:30:25.029346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.543 [2024-11-20 11:30:25.029350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:32.543 [2024-11-20 11:30:25.030398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.543 [2024-11-20 11:30:25.030551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.543 [2024-11-20 11:30:25.030553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.543 [2024-11-20 11:30:25.035770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.543 [2024-11-20 11:30:25.036117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.543 [2024-11-20 11:30:25.036130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.543 [2024-11-20 11:30:25.036136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.543 [2024-11-20 11:30:25.036292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.543 [2024-11-20 11:30:25.036444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.543 [2024-11-20 11:30:25.036452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.543 [2024-11-20 11:30:25.036458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.543 [2024-11-20 11:30:25.036463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.543 [2024-11-20 11:30:25.048383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.543 [2024-11-20 11:30:25.048988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.543 [2024-11-20 11:30:25.049021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.543 [2024-11-20 11:30:25.049030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.543 [2024-11-20 11:30:25.049211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.543 [2024-11-20 11:30:25.049366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.543 [2024-11-20 11:30:25.049373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.543 [2024-11-20 11:30:25.049379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.543 [2024-11-20 11:30:25.049385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.543 [2024-11-20 11:30:25.061007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.543 [2024-11-20 11:30:25.061504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.543 [2024-11-20 11:30:25.061535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.543 [2024-11-20 11:30:25.061544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.543 [2024-11-20 11:30:25.061713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.543 [2024-11-20 11:30:25.061867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.543 [2024-11-20 11:30:25.061879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.543 [2024-11-20 11:30:25.061884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.061890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.544 [2024-11-20 11:30:25.073659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.074140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.074177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.074186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.074353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.074507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.074514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.074520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.074526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.544 [2024-11-20 11:30:25.086290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.086882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.086913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.086922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.087089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.087249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.087257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.087262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.087269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.544 [2024-11-20 11:30:25.099027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.099507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.099538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.099547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.099715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.099870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.099877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.099883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.099894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.544 [2024-11-20 11:30:25.111672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.112214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.112236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.112242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.112399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.112551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.112557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.112564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.112570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.544 [2024-11-20 11:30:25.124327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.124765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.124780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.124786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.124937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.125088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.125095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.125100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.125105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.544 [2024-11-20 11:30:25.136998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.137527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.137558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.137567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.137734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.137889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.137896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.137901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.137907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.544 [2024-11-20 11:30:25.149684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.150060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.150079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.150085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.150242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.150395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.150401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.150406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.150411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.544 [2024-11-20 11:30:25.162318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.162812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.162843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.162852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.163020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.163180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.163187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.163193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.163201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.544 [2024-11-20 11:30:25.174961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.175491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.175523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.175532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.175699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.175853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.175860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.544 [2024-11-20 11:30:25.175866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.544 [2024-11-20 11:30:25.175872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.544 [2024-11-20 11:30:25.187639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.544 [2024-11-20 11:30:25.188248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.544 [2024-11-20 11:30:25.188280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.544 [2024-11-20 11:30:25.188289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.544 [2024-11-20 11:30:25.188459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.544 [2024-11-20 11:30:25.188614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.544 [2024-11-20 11:30:25.188620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.188626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.188632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.545 [2024-11-20 11:30:25.200255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.200753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.200784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.200793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.200961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.201115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.201123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.201129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.201135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.545 [2024-11-20 11:30:25.212907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.213469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.213501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.213510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.213677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.213831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.213838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.213844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.213850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.545 [2024-11-20 11:30:25.225613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.226242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.226274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.226283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.226452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.226607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.226617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.226623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.226629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.545 [2024-11-20 11:30:25.238257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.238868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.238899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.238908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.239075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.239235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.239243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.239249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.239255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.545 [2024-11-20 11:30:25.250888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.251380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.251397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.251402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.251554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.251705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.251713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.251718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.251723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.545 [2024-11-20 11:30:25.263618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.263971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.263985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.263990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.264141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.264297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.264305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.264310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.264314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.545 [2024-11-20 11:30:25.276354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.545 [2024-11-20 11:30:25.276905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.545 [2024-11-20 11:30:25.276936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420 00:29:32.545 [2024-11-20 11:30:25.276945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set 00:29:32.545 [2024-11-20 11:30:25.277112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor 00:29:32.545 [2024-11-20 11:30:25.277272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.545 [2024-11-20 11:30:25.277280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.545 [2024-11-20 11:30:25.277287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.545 [2024-11-20 11:30:25.277293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.809 [2024-11-20 11:30:25.289056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:32.809 [2024-11-20 11:30:25.289637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:32.809 [2024-11-20 11:30:25.289669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6c8000 with addr=10.0.0.2, port=4420
00:29:32.809 [2024-11-20 11:30:25.289678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c8000 is same with the state(6) to be set
00:29:32.809 [2024-11-20 11:30:25.289845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c8000 (9): Bad file descriptor
00:29:32.809 [2024-11-20 11:30:25.289999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:32.809 [2024-11-20 11:30:25.290006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:32.809 [2024-11-20 11:30:25.290012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:32.809 [2024-11-20 11:30:25.290018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:32.809 [2024-11-20 11:30:25.301778 - 11:30:25.340457] four further reset attempts (11:30:25.301, .314, .327, .339) fail with the identical nine-message sequence; only the timestamps differ
00:29:32.809 4253.17 IOPS, 16.61 MiB/s [2024-11-20T10:30:25.551Z]
00:29:32.809 [2024-11-20 11:30:25.352360 - 11:30:25.353353] the reset attempt at 11:30:25.352 fails the same way
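The interleaved bdevperf progress marker above (4253.17 IOPS, 16.61 MiB/s) is consistent with the job's 4096-byte I/O size: MiB/s = IOPS x 4096 / 2^20. A quick sanity check of that arithmetic, as an illustrative one-liner (not part of the test suite):

  # Illustrative only: confirm the MiB/s figure follows from the IOPS figure.
  # 4096 is the IO size from the bdevperf job line; 1048576 bytes = 1 MiB.
  awk 'BEGIN { iops = 4253.17; printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'
  # prints 16.61 MiB/s, matching the marker in the log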
00:29:32.810 [2024-11-20 11:30:25.364981 - 11:30:25.683208] the same resetting-controller / connect() errno 111 / reinitialization-failed sequence repeats 26 more times for tqpair=0x6c8000 (attempts roughly every 12.7 ms, 11:30:25.364 through 11:30:25.682; messages identical apart from timestamps)
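errno 111 is ECONNREFUSED: the host-side bdev_nvme reconnect logic keeps dialing 10.0.0.2:4420 before the target's TCP listener exists, so every attempt dies in posix_sock_create() and the controller lands back in the failed state. The noise is expected while the target is still being configured. A minimal sketch of the kind of wait loop that would avoid dialing a port that is not yet open (the helper name and 30 s deadline are assumptions, not anything from SPDK or this harness):

  # Sketch only: poll until something is listening on addr:port.
  # wait_for_tcp_listener is a hypothetical helper, not an SPDK function.
  wait_for_tcp_listener() {
          local addr=$1 port=$2 deadline=$((SECONDS + 30))
          while (( SECONDS < deadline )); do
                  # /dev/tcp is a bash pseudo-device; the open fails with
                  # ECONNREFUSED while nothing is listening on the port.
                  if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
                          return 0    # connected; fd closed with the subshell
                  fi
                  sleep 0.1
          done
          return 1
  }

  wait_for_tcp_listener 10.0.0.2 4420 || echo "target never came up" >&2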
00:29:33.076 [2024-11-20 11:30:25.695049 - 11:30:25.695984] the reset attempt at 11:30:25.695 fails with the same connect() errno 111 sequence
00:29:33.076 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:33.076 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.076 [2024-11-20 11:30:25.707759 - 11:30:25.734065] reset attempts at 11:30:25.707, .720 and .733 fail with the same sequence, interleaved with the shell trace above
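The recurring xtrace_disable / "set +x" pairs in the trace are the harness switching bash command tracing off around helper internals so the console is not flooded with wrapper plumbing. The real helpers live in autotest_common.sh and track nesting; the bodies below are an assumed minimal sketch of the pattern, not the actual implementation:

  # Sketch of the xtrace gating pattern visible in the trace above.
  # Real autotest_common.sh versions also handle nested enable/disable.
  xtrace_disable() {
          PREV_XTRACE=$(set +o | grep xtrace)  # remember the current setting
          set +x                               # produces the "set +x" log lines
  }

  xtrace_restore() {
          eval "$PREV_XTRACE"                  # re-enable tracing if it was on
  }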
00:29:33.076 [2024-11-20 11:30:25.745834 - 11:30:25.746825] the reset attempt at 11:30:25.745 fails with the same sequence
00:29:33.076 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.076 [2024-11-20 11:30:25.753875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:33.076 [2024-11-20 11:30:25.758458 - 11:30:25.759417] the reset attempt at 11:30:25.758 fails with the same sequence
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:33.076 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.076 [2024-11-20 11:30:25.771187 - 11:30:25.784804] reset attempts at 11:30:25.771 and .783 fail with the same sequence
00:29:33.077 Malloc0
00:29:33.077 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.077 [2024-11-20 11:30:25.796434 - 11:30:25.797412] the reset attempt at 11:30:25.796 fails with the same sequence
00:29:33.077 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:33.077 [2024-11-20 11:30:25.809180 - 11:30:25.810173] the reset attempt at 11:30:25.809 fails with the same sequence
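Steps @17 through @20 traced above, plus the @21 listener call traced just below, are the standard five-call bring-up of an NVMe-oF TCP target over the RPC socket. Collected in one place, the same sequence issued directly with scripts/rpc.py looks roughly like this (the socket path is SPDK's default and an assumption here; all flags are copied verbatim from the trace):

  # Same bring-up as host/bdevperf.sh@17-21, issued directly with rpc.py.
  # -s selects the RPC socket; /var/tmp/spdk.sock is the default path.
  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as traced above
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
          -a -s SPDK00000000000001                    # -a: allow any host, -s: serial number
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420                  # the listener the host has been retrying against

Only after the last call does a listener exist on 10.0.0.2:4420, which is why the reconnect attempts below finally succeed.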
00:29:33.077 [2024-11-20 11:30:25.810173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.077 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.077 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.077 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.077 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.337 [2024-11-20 11:30:25.817829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.337 [2024-11-20 11:30:25.821812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.337 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.337 11:30:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2918013 00:29:33.337 [2024-11-20 11:30:25.886951] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:29:34.849 4483.29 IOPS, 17.51 MiB/s [2024-11-20T10:30:28.534Z] 5530.25 IOPS, 21.60 MiB/s [2024-11-20T10:30:29.477Z] 6345.22 IOPS, 24.79 MiB/s [2024-11-20T10:30:30.421Z] 7009.70 IOPS, 27.38 MiB/s [2024-11-20T10:30:31.807Z] 7533.73 IOPS, 29.43 MiB/s [2024-11-20T10:30:32.378Z] 7967.83 IOPS, 31.12 MiB/s [2024-11-20T10:30:33.764Z] 8331.31 IOPS, 32.54 MiB/s [2024-11-20T10:30:34.706Z] 8664.50 IOPS, 33.85 MiB/s [2024-11-20T10:30:34.706Z] 8949.73 IOPS, 34.96 MiB/s 00:29:41.964 Latency(us) 00:29:41.964 [2024-11-20T10:30:34.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.964 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:41.964 Verification LBA range: start 0x0 length 0x4000 00:29:41.964 Nvme1n1 : 15.01 8948.07 34.95 13177.27 0.00 5765.89 552.96 14964.05 00:29:41.964 [2024-11-20T10:30:34.706Z] =================================================================================================================== 00:29:41.964 [2024-11-20T10:30:34.706Z] Total : 8948.07 34.95 13177.27 0.00 5765.89 552.96 14964.05 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf 
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:41.964 rmmod nvme_tcp
00:29:41.964 rmmod nvme_fabrics
00:29:41.964 rmmod nvme_keyring
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2919115 ']'
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2919115
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2919115 ']'
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2919115
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2919115
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:41.964 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2919115'
00:29:41.965 killing process with pid 2919115
00:29:41.965 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2919115
00:29:41.965 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2919115
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:42.225 11:30:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:44.139 11:30:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:44.139
00:29:44.139 real 0m28.345s
00:29:44.139 user 1m3.413s
00:29:44.139 sys 0m7.750s
00:29:44.139 11:30:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:44.139 11:30:36
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.139 ************************************ 00:29:44.139 END TEST nvmf_bdevperf 00:29:44.139 ************************************ 00:29:44.139 11:30:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.139 11:30:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:44.139 11:30:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.139 11:30:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.401 ************************************ 00:29:44.401 START TEST nvmf_target_disconnect 00:29:44.401 ************************************ 00:29:44.401 11:30:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:44.401 * Looking for test storage... 00:29:44.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.401 --rc genhtml_branch_coverage=1 00:29:44.401 --rc genhtml_function_coverage=1 00:29:44.401 --rc genhtml_legend=1 00:29:44.401 --rc geninfo_all_blocks=1 00:29:44.401 --rc geninfo_unexecuted_blocks=1 00:29:44.401 00:29:44.401 ' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.401 --rc genhtml_branch_coverage=1 00:29:44.401 --rc genhtml_function_coverage=1 00:29:44.401 --rc genhtml_legend=1 00:29:44.401 --rc geninfo_all_blocks=1 00:29:44.401 --rc geninfo_unexecuted_blocks=1 00:29:44.401 00:29:44.401 ' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.401 --rc genhtml_branch_coverage=1 00:29:44.401 --rc genhtml_function_coverage=1 00:29:44.401 --rc genhtml_legend=1 00:29:44.401 --rc geninfo_all_blocks=1 00:29:44.401 --rc geninfo_unexecuted_blocks=1 00:29:44.401 00:29:44.401 ' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:44.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.401 --rc genhtml_branch_coverage=1 00:29:44.401 --rc genhtml_function_coverage=1 00:29:44.401 --rc genhtml_legend=1 00:29:44.401 --rc geninfo_all_blocks=1 00:29:44.401 --rc geninfo_unexecuted_blocks=1 00:29:44.401 00:29:44.401 ' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.401 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.402 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.663 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.663 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.663 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.663 11:30:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.927 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:52.928 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:52.928 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:52.928 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:52.928 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
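Before any listener can come up, nvmf_tcp_init (traced below) wires the two E810 ports back-to-back through a network namespace, so a single host plays initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) at once. Collected in one place, the sequence it issues amounts to:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
    ping -c 1 10.0.0.2                                      # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # and the reverse path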
00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:29:52.928 00:29:52.928 --- 10.0.0.2 ping statistics --- 00:29:52.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.928 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:29:52.928 00:29:52.928 --- 10.0.0.1 ping statistics --- 00:29:52.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.928 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.928 ************************************ 00:29:52.928 START TEST nvmf_target_disconnect_tc1 00:29:52.928 ************************************ 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.928 11:30:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.928 [2024-11-20 11:30:44.657276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.928 [2024-11-20 11:30:44.657374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf8bad0 with addr=10.0.0.2, port=4420 00:29:52.928 [2024-11-20 11:30:44.657401] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:52.928 [2024-11-20 11:30:44.657413] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:52.928 [2024-11-20 11:30:44.657421] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:52.928 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:52.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:52.928 Initializing NVMe Controllers 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:52.928 00:29:52.928 real 0m0.147s 00:29:52.928 user 0m0.066s 00:29:52.928 sys 0m0.080s 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:52.928 ************************************ 00:29:52.928 END TEST nvmf_target_disconnect_tc1 00:29:52.928 ************************************ 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
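tc1 above is deliberately a negative test: no subsystem is listening on 10.0.0.2:4420 yet, so the reconnect example must fail to probe, and the NOT/valid_exec_arg machinery turns that expected failure (es=1) into a pass. Stripped of the xtrace noise, the pattern reduces to roughly the following sketch (the real NOT() in autotest_common.sh additionally screens crash-level exit codes, as the (( es > 128 )) check in the trace shows):

    NOT() { ! "$@"; }    # succeeds only when the wrapped command fails
    if NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
           -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo 'tc1 OK: probe was refused as expected'
    fi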
00:29:52.928 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:52.928 ************************************ 00:29:52.928 START TEST nvmf_target_disconnect_tc2 00:29:52.928 ************************************ 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2925172 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2925172 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2925172 ']' 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.929 11:30:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.929 [2024-11-20 11:30:44.823671] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:29:52.929 [2024-11-20 11:30:44.823728] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.929 [2024-11-20 11:30:44.922835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.929 [2024-11-20 11:30:44.975552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.929 [2024-11-20 11:30:44.975602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
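tc2's nvmfappstart launches nvmf_tgt inside the target namespace with -m 0xF0, so the four reactors reported just below come up on cores 4 through 7. A throwaway check of the mask-to-core mapping (illustrative only):

    # 0xF0 = 1111 0000 in binary, i.e. CPUs 4, 5, 6 and 7
    for i in $(seq 0 7); do (( (0xF0 >> i) & 1 )) && echo "reactor core $i"; done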
00:29:52.929 [2024-11-20 11:30:44.975612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.929 [2024-11-20 11:30:44.975620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.929 [2024-11-20 11:30:44.975626] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.929 [2024-11-20 11:30:44.977659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:52.929 [2024-11-20 11:30:44.977822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:52.929 [2024-11-20 11:30:44.977985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:52.929 [2024-11-20 11:30:44.977985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:52.929 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.929 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:52.929 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.929 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.929 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 Malloc0 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 [2024-11-20 11:30:45.735660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 11:30:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 [2024-11-20 11:30:45.776055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2925519 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:53.189 11:30:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:55.109 11:30:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2925172 00:29:55.109 11:30:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:55.109 Read completed with error (sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.109 Read completed with error (sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.109 Read completed with error (sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.109 Read completed with error (sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.109 Read completed with error (sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.109 Read completed with error (sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.109 Read completed with error 
(sct=0, sc=8) 00:29:55.109 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 [2024-11-20 11:30:47.814983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 
Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Write completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 Read completed with error (sct=0, sc=8) 00:29:55.110 starting I/O failed 00:29:55.110 [2024-11-20 11:30:47.815392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:55.110 [2024-11-20 11:30:47.815849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-20 11:30:47.815875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-20 11:30:47.816410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-20 11:30:47.816464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-20 11:30:47.816791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-20 11:30:47.816807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 00:29:55.110 [2024-11-20 11:30:47.817126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.110 [2024-11-20 11:30:47.817138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.110 qpair failed and we were unable to recover it. 
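Two failure signatures repeat from here to the end of the capture. The completion storm above reports (sct=0, sc=8): status code type 0 (generic command status) with status 0x08, which in NVMe terms should read as command aborted due to SQ deletion, i.e. in-flight I/O torn down along with its queue pair once the target (pid 2925172) was SIGKILLed. The connect() retries report errno = 111, plain ECONNREFUSED, since nothing is listening on 10.0.0.2:4420 any more; on a typical Linux box with kernel headers installed the constant can be confirmed with:

    grep -w 111 /usr/include/asm-generic/errno.h
    # #define ECONNREFUSED    111     /* Connection refused */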
00:29:55.110 [2024-11-20 11:30:47.817505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.110 [2024-11-20 11:30:47.817517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:55.110 qpair failed and we were unable to recover it.
[the same triple repeats roughly 200 more times between 11:30:47.817875 and 11:30:47.883666, every attempt failing with errno = 111 against tqpair=0x7fa99c000b90, addr=10.0.0.2, port=4420; Jenkins timestamps advance from 00:29:55.110 to 00:29:55.388]
00:29:55.388 [2024-11-20 11:30:47.883999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.884027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.884348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.884377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.884786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.884814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.885049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.885080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.885485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.885516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.885858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.885888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.886267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.886298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.886686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.886715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.887068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.887096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.887476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.887506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 
00:29:55.388 [2024-11-20 11:30:47.887848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.887877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.888133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.888172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.888454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.888483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.888840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.888869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.889229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.889259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.889691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.889720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.890081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.890108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.890448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.890477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.890780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.890809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.891064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.891095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 
00:29:55.388 [2024-11-20 11:30:47.891448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.891484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.388 [2024-11-20 11:30:47.891819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.388 [2024-11-20 11:30:47.891846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.388 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.892221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.892253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.892613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.892641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.892944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.892971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.893335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.893365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.893735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.893764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.894057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.894085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.894436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.894467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.894762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.894790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 
00:29:55.389 [2024-11-20 11:30:47.895148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.895187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.895544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.895572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.895945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.895973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.896354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.896384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.896752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.896781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.897181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.897212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.897586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.897614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.897976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.898004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.898345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.898375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.898743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.898772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 
00:29:55.389 [2024-11-20 11:30:47.899136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.899172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.899573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.899601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.899982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.900009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.900391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.900421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.900781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.900809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.901189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.901219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.901653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.901681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.902002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.902032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.902405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.902434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.902795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.902823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 
00:29:55.389 [2024-11-20 11:30:47.903211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.903240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.903603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.903633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.903964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.903992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.904352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.904383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.904720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.904749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.905121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.905150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.905514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.905543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.905895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.905923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.906301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.389 [2024-11-20 11:30:47.906331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.389 qpair failed and we were unable to recover it. 00:29:55.389 [2024-11-20 11:30:47.906707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.906736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 
00:29:55.390 [2024-11-20 11:30:47.907125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.907167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.907512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.907540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.907920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.907948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.908341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.908370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.908724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.908751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.909107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.909135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.909484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.909514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.909876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.909905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.910179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.910210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.910597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.910625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 
00:29:55.390 [2024-11-20 11:30:47.911000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.911028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.911382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.911412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.911780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.911808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.912177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.912207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.912494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.912522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.912910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.912938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.913181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.913213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.913570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.913599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.913966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.913994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.914343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.914373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 
00:29:55.390 [2024-11-20 11:30:47.914612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.914644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.915037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.915066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.915437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.915468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.915825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.915854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.916236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.916266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.916481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.916508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.916880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.916908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.917287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.917317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.917685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.917712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.917966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.917997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 
00:29:55.390 [2024-11-20 11:30:47.918384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.918414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.390 [2024-11-20 11:30:47.918692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.390 [2024-11-20 11:30:47.918719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.390 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.919077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.919105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.919521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.919552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.919906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.919934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.920296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.920327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.920718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.920747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.921099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.921128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.921484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.921514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.921907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.921935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 
00:29:55.391 [2024-11-20 11:30:47.922280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.922315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.922665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.922693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.923017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.923046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.923409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.923439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.923797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.923825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.924201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.924231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.924609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.924637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.925001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.925029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.925394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.925424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.925785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.925814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 
00:29:55.391 [2024-11-20 11:30:47.926184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.926213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.926559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.926588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.926971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.926999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.927322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.927352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.927737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.927766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.928106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.928135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.928492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.928521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.928906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.928934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.929348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.929378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.929735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.929763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 
00:29:55.391 [2024-11-20 11:30:47.930130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.930169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.930529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.930559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.930919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.930947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.931281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.931310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.931561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.931591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.931950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.931977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.932334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.932365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.932737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.932767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.933105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.933134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 00:29:55.391 [2024-11-20 11:30:47.933565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.391 [2024-11-20 11:30:47.933595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.391 qpair failed and we were unable to recover it. 
00:29:55.392 [2024-11-20 11:30:47.933958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.392 [2024-11-20 11:30:47.933986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:55.392 qpair failed and we were unable to recover it.
00:29:55.397 [... the same pair of errors, posix_sock_create connect() failed with errno = 111 followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7fa99c000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it.", repeats for every reconnect attempt from 11:30:47.934 through 11:30:48.015; the remaining ~200 identical repetitions are elided here ...]
00:29:55.397 [2024-11-20 11:30:48.015392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.015421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.015636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.015664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.015983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.016012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.016394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.016424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.016783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.016810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.017195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.017225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.017608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.017636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.017997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.018024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.018388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.018417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.018671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.018699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 
00:29:55.397 [2024-11-20 11:30:48.019042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.019071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.019426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.397 [2024-11-20 11:30:48.019456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.397 qpair failed and we were unable to recover it. 00:29:55.397 [2024-11-20 11:30:48.019841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.019870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.020230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.020259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.020514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.020542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.020917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.020945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.021319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.021350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.021696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.021724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.022088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.022117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.022392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.022420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 
00:29:55.398 [2024-11-20 11:30:48.022806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.022834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.023155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.023195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.023526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.023554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.023921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.023951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.024314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.024343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.024712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.024742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.025114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.025142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.025523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.025559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.025925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.025952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.026298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.026328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 
00:29:55.398 [2024-11-20 11:30:48.026698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.026725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.027173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.027201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.027600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.027628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.027867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.027893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.028222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.028251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.028615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.028642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.029044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.029071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.029406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.029435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.029785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.029813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.030222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.030252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 
00:29:55.398 [2024-11-20 11:30:48.030540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.030569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.030946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.030976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.031338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.031368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.031734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.031764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.032094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.032124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.032490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.032521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.032925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.032955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.033346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.033377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.033755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.033784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 00:29:55.398 [2024-11-20 11:30:48.034152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.398 [2024-11-20 11:30:48.034214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.398 qpair failed and we were unable to recover it. 
00:29:55.399 [2024-11-20 11:30:48.034595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.034625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.034875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.034905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.035269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.035299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.035639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.035669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.036031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.036061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.036411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.036442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.036772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.036801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.037045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.037078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.037419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.037450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.037784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.037813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 
00:29:55.399 [2024-11-20 11:30:48.038153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.038194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.038603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.038632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.038998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.039027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.039394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.039425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.039833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.039862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.040231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.040263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.040626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.040656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.041009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.041045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.041379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.041411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.041774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.041804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 
00:29:55.399 [2024-11-20 11:30:48.042175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.042206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.042613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.042642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.042882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.042915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.043179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.043211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.043574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.043604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.043864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.043892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.044243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.044275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.044542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.044573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.044845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.044875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.045232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.045262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 
00:29:55.399 [2024-11-20 11:30:48.045664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.045693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.046056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.046086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.046447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.046478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.399 [2024-11-20 11:30:48.046841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.399 [2024-11-20 11:30:48.046870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.399 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.047247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.047277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.047686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.047716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.048090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.048119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.048489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.048520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.048877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.048907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.049179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.049210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 
00:29:55.400 [2024-11-20 11:30:48.049582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.049611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.049851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.049880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.050276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.050306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.050680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.050709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.051084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.051114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.051482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.051512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.051875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.051903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.052271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.052300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.052662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.052691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.053054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.053083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 
00:29:55.400 [2024-11-20 11:30:48.053458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.053487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.053867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.053896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.054129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.054157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.054532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.054560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.054791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.054819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.055182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.055212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.055674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.055702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.056067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.056101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.056511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.056541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.056971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.056999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 
00:29:55.400 [2024-11-20 11:30:48.057245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.057278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.057638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.057666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.058024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.058060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.058392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.058423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.058783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.058812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.059170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.059200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.059552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.059580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.059935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.059963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.060327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.060357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.060691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.060719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 
00:29:55.400 [2024-11-20 11:30:48.061092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.061119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.400 [2024-11-20 11:30:48.061291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.400 [2024-11-20 11:30:48.061321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.400 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.061689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.061716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.062072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.062099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.062461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.062490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.062783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.062810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.063180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.063209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.063574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.063601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.063861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.063889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.064241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.064270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 
00:29:55.401 [2024-11-20 11:30:48.064648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.064675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.064919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.064947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.065187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.065216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.065565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.065592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.065969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.065996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.066146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.066188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.066548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.066576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.066974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.067001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.067442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.067470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 00:29:55.401 [2024-11-20 11:30:48.067831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.401 [2024-11-20 11:30:48.067859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:55.401 qpair failed and we were unable to recover it. 
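On Linux, errno = 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 was answered with RST, i.e. nothing was listening on the NVMe/TCP port when posix_sock_create() tried to connect. A minimal stand-alone sketch of that failure mode follows; it is not SPDK code, only the address and port are taken from the log, everything else is illustrative.

/* Hypothetical stand-alone reproduction of the errno = 111 failure above:
 * connect() to a TCP port with no listener fails with ECONNREFUSED, the
 * same errno posix_sock_create() reports in this log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target, errno is 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}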
00:29:55.401 [2024-11-20 11:30:48.068106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.401 [2024-11-20 11:30:48.068133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:55.401 qpair failed and we were unable to recover it.
[... one further identical triplet for tqpair=0x7fa99c000b90 at 11:30:48.068514 elided ...]
00:29:55.401 [2024-11-20 11:30:48.068777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0ce00 is same with the state(6) to be set
00:29:55.401 [2024-11-20 11:30:48.069491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.401 [2024-11-20 11:30:48.069551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:55.401 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0xd170c0 at successive timestamps from 11:30:48.069947 through 11:30:48.078949; duplicate entries elided ...]
00:29:55.402 [2024-11-20 11:30:48.079280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.079291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.079617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.079628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.079976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.079988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.080366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.080378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.080684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.080695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.081045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.081058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.081302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.081314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.081622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.081633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.081931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.081943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.082289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.082300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 
00:29:55.402 [2024-11-20 11:30:48.082617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.082629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.082979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.082990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.083338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.083352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.083697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.083708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.084095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.084107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.084393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.084405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.084752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.084763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.085094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.085105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.085513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.085525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.085827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.085838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 
00:29:55.402 [2024-11-20 11:30:48.086039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.086051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.086368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.086381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.086762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.086772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.087097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.087108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.087328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.087342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.087682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.087694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.087922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.087934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.088223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.088236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.402 [2024-11-20 11:30:48.088460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.402 [2024-11-20 11:30:48.088470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.402 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.088789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.088801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-20 11:30:48.089148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.089166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.089489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.089501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.089859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.089874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.090209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.090221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.090447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.090458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.090659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.090670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.091030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.091041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.091388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.091400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.091610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.091623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.091953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.091965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-20 11:30:48.092294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.092305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.092619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.092631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.092977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.092989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.093325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.093338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.093675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.093687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.094015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.094027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.094371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.094384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.094741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.094752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.094971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.094982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.095342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.095354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-20 11:30:48.095675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.095688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.096011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.096021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.096289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.096300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.096520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.096531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.096869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.096881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.097092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.097106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.097457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.097470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.097790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.097802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.098152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.098169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.098588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.098600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 
00:29:55.403 [2024-11-20 11:30:48.098920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.098933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.099145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.099156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.099529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.403 [2024-11-20 11:30:48.099542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.403 qpair failed and we were unable to recover it. 00:29:55.403 [2024-11-20 11:30:48.099859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.099870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.100271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.100283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.100619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.100632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.100975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.100987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.101330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.101343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.101673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.101684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.101884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.101902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 
00:29:55.404 [2024-11-20 11:30:48.102255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.102267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.102617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.102628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.103010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.103020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.103329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.103342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.103651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.103663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.104011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.104022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.104380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.104392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.104724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.104736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.105074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.105085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 00:29:55.404 [2024-11-20 11:30:48.105427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.404 [2024-11-20 11:30:48.105438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.404 qpair failed and we were unable to recover it. 
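For reference, errno = 111 on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being answered with a RST, typically because nothing is listening on that port at the moment posix_sock_create() tries to connect. Below is a minimal standalone sketch, not part of the SPDK tree, that reproduces the same errno against the address and port taken from this log; it assumes the host is reachable and actively refusing, otherwise connect() fails differently (e.g. ETIMEDOUT or EHOSTUNREACH).

    /*
     * Standalone sketch (not SPDK code): reproduce errno 111.
     * ECONNREFUSED == 111 on Linux; the address and port are taken
     * from the log above and assume a reachable host with no listener.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

In a test run like this, a burst of ECONNREFUSED usually just means the target side was not (or was no longer) listening at that instant; the errors only indicate a real problem if they persist after the target is expected to be up.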
00:29:55.404 [the tqpair=0xd170c0 failure sequence continues through 11:30:48.108208, over a hundred iterations in total for this tqpair]
00:29:55.983 [after a ~0.43 s gap in the trace, the same sequence resumes at 11:30:48.535153 and repeats several dozen more times through 11:30:48.567962]
00:29:55.985 [2024-11-20 11:30:48.567931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.985 [2024-11-20 11:30:48.567962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:55.985 qpair failed and we were unable to recover it.
00:29:55.985 [2024-11-20 11:30:48.568317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.568349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.568705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.568736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.569086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.569117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.569495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.569528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.569882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.569913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.570276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.570308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.570671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.570702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.571067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.571097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.571535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.571566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.571916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.571947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 
00:29:55.985 [2024-11-20 11:30:48.572303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.572335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.572683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.572714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.573064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.573096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.573456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.573489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.573852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.573886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.574247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.574282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.574644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.574678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.985 [2024-11-20 11:30:48.575037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.985 [2024-11-20 11:30:48.575069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.985 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.575399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.575431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.575786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.575816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-20 11:30:48.576182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.576214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.576572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.576605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.576981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.577012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.577343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.577376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.577731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.577762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.578125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.578156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.578558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.578593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.578947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.578978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.579340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.579374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.579737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.579769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-20 11:30:48.580130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.580173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.580548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.580579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.580946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.580978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.581334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.581366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.581721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.581752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.582111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.582144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.582512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.582543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.582905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.582937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.583298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.583331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.583699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.583737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-20 11:30:48.584093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.584123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.584481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.584513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.584867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.584898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.585255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.585288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.585647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.585679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.586025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.586057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.586418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.586451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.586783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.586816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.587151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.587196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.587553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.587583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 
00:29:55.986 [2024-11-20 11:30:48.587958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.587988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.588344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.588376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.588737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.588768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.589141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.589186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.589589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.589620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.589987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.986 [2024-11-20 11:30:48.590020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.986 qpair failed and we were unable to recover it. 00:29:55.986 [2024-11-20 11:30:48.590384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.590418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.590772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.590802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.591075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.591105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.591462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.591494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 
00:29:55.987 [2024-11-20 11:30:48.591849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.591883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.592242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.592275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.592515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.592549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.592906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.592938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.593325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.593358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.593723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.593754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.594109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.594146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.594733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.594774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.595185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.595223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.595604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.595635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 
00:29:55.987 [2024-11-20 11:30:48.596006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.596037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.596390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.596422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.596665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.596699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.597049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.597081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.597440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.597471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.597826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.597858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.598236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.598269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.598632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.598662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.599020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.599050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.599423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.599460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 
00:29:55.987 [2024-11-20 11:30:48.599842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.599875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.600230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.600264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.600494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.600524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.600893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.600924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.601326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.601358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.601723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.601754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.602126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.602169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.602568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.602599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.987 [2024-11-20 11:30:48.602946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.987 [2024-11-20 11:30:48.602977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.987 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.603220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.603251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 
00:29:55.988 [2024-11-20 11:30:48.603497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.603530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.603883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.603917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.604286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.604321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.604685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.604718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.604966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.604996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.605428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.605461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.605810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.605841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.606074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.606106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.606557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.606591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.606959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.606989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 
00:29:55.988 [2024-11-20 11:30:48.607343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.607377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.607745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.607776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.608143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.608188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.608557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.608589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.608953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.608984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.609347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.609379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.609744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.609776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.610132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.610180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.610508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.610539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.610902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.610935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 
00:29:55.988 [2024-11-20 11:30:48.611286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.611321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.611668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.611698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.611930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.611964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.612314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.612348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.612707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.612740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.613084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.613116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.613485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.613517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.613876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.613908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.614263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.614295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.614660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.614690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 
00:29:55.988 [2024-11-20 11:30:48.615052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.988 [2024-11-20 11:30:48.615083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.988 qpair failed and we were unable to recover it. 00:29:55.988 [2024-11-20 11:30:48.615445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.615481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.615916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.615945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.616298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.616331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.616692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.616725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.617072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.617102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.617340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.617376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.617761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.617793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.618049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.618081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 00:29:55.989 [2024-11-20 11:30:48.618408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.989 [2024-11-20 11:30:48.618440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.989 qpair failed and we were unable to recover it. 
00:29:55.989 [2024-11-20 11:30:48.618799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.989 [2024-11-20 11:30:48.618830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:55.989 qpair failed and we were unable to recover it.
[... the identical three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 times with only the timestamps advancing, from 11:30:48.619 through 11:30:48.706 ...]
00:29:55.995 [2024-11-20 11:30:48.705966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.995 [2024-11-20 11:30:48.706001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:55.995 qpair failed and we were unable to recover it.
00:29:55.995 [2024-11-20 11:30:48.706341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.706375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.706736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.706768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.707112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.707143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.707412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.707446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.707789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.707819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.708189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.708223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.708463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.708493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.708840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.708871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.709221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.709255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.709600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.709631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 
00:29:55.995 [2024-11-20 11:30:48.709999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.710032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.710280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.710311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.712674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.712755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.713157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.713212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.713602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.713634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.713977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.714010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.714394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.714427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.714793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.714824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:55.995 [2024-11-20 11:30:48.715184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.995 [2024-11-20 11:30:48.715219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:55.995 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.715590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.715623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 
00:29:56.266 [2024-11-20 11:30:48.715987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.716020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.716344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.716377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.716734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.716764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.717120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.717154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.717532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.717565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.717912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.717943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.718282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.718315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.718558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.718589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.718954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.718986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.719343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.719376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 
00:29:56.266 [2024-11-20 11:30:48.719692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.719725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.720064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.720095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.720432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.720466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.720698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.720729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.721112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.721141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.721492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.721524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.721879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.721910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.722232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.722266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.722502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.722532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.722873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.722912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 
00:29:56.266 [2024-11-20 11:30:48.723278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.723310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.723625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.723658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.724022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.724054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.724419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.724452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.724804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.266 [2024-11-20 11:30:48.724838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.266 qpair failed and we were unable to recover it. 00:29:56.266 [2024-11-20 11:30:48.725065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.725095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.725460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.725493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.725817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.725848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.726200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.726234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.726590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.726622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 
00:29:56.267 [2024-11-20 11:30:48.726959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.726989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.727345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.727377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.727716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.727746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.728095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.728126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.728463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.728497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.728869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.728899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.729091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.729121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.729497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.729530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.729871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.729903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.730245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.730278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 
00:29:56.267 [2024-11-20 11:30:48.730650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.730681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.731037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.731070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.731403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.731434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.731795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.731826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.732189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.732222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.732602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.732633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.732993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.733024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.733393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.733427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.733777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.733810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.734182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.734214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 
00:29:56.267 [2024-11-20 11:30:48.734580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.734613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.734977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.735009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.735431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.735465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.735812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.735843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.736212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.736244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.736619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.736649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.737009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.737040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.737410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.737442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.737710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.737740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.267 [2024-11-20 11:30:48.738103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.738133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 
00:29:56.267 [2024-11-20 11:30:48.738484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.267 [2024-11-20 11:30:48.738523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.267 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.738913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.738946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.739338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.739370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.739730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.739762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.740116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.740146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.740436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.740469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.740829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.740860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.741215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.741248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.741596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.741627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.741997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.742027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 
00:29:56.268 [2024-11-20 11:30:48.742388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.742422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.742777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.742809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.743183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.743215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.743549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.743579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.743952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.743983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.744344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.744379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.744733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.744764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.745118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.745149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.745558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.745590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.745930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.745959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 
00:29:56.268 [2024-11-20 11:30:48.746320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.746353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.746739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.746770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.747133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.747175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.747529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.747559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.747923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.747953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.748322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.748355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.748703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.748736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.749030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.749068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.749425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.749456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.749814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.749845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 
00:29:56.268 [2024-11-20 11:30:48.750249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.750282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.750653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.750685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.751048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.751081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.751442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.751476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.751828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.751858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.752220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.752253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.752638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.752669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.753018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.753049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.753427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.268 [2024-11-20 11:30:48.753461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.268 qpair failed and we were unable to recover it. 00:29:56.268 [2024-11-20 11:30:48.753818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.753848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 
00:29:56.269 [2024-11-20 11:30:48.754208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.754242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.754623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.754654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.755022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.755056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.755413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.755446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.755818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.755850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.756208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.756239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.756618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.756650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.756999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.757030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.757369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.757402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 00:29:56.269 [2024-11-20 11:30:48.757760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.269 [2024-11-20 11:30:48.757792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.269 qpair failed and we were unable to recover it. 
00:29:56.269 [2024-11-20 11:30:48.758196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-11-20 11:30:48.758229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:56.269 qpair failed and we were unable to recover it.
00:29:56.275 [the three messages above repeat verbatim for roughly 200 further connection attempts, from 2024-11-20 11:30:48.758 through 11:30:48.838; every attempt to tqpair=0xd170c0 (addr=10.0.0.2, port=4420) failed with errno = 111, and no qpair could be recovered]
00:29:56.275 [2024-11-20 11:30:48.839183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.839215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.839590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.839620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.839975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.840008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.840469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.840501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.840726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.840755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.841101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.841131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.841560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.841591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.841961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.841994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.842339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.842372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.842730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.842761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 
00:29:56.275 [2024-11-20 11:30:48.843112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.843143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.843455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.843487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.843835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.843874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.844105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.844136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.844534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.844565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.844923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.844954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.845197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.845228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.845593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.845624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.845996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.846029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.846415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.846448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 
00:29:56.275 [2024-11-20 11:30:48.846803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.846836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.847199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.847232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.847605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.847636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.848011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.848042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.848382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.275 [2024-11-20 11:30:48.848415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.275 qpair failed and we were unable to recover it. 00:29:56.275 [2024-11-20 11:30:48.848775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.848807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.849179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.849212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.849618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.849649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.850020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.850052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.850283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.850314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 
00:29:56.276 [2024-11-20 11:30:48.850649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.850682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.850913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.850943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.851314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.851348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.851712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.851742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.852112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.852143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.852518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.852550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.852919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.852952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.853327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.853359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.853719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.853751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.853996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.854027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 
00:29:56.276 [2024-11-20 11:30:48.854273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.854306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.854547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.854578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.854843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.854877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.855233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.855265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.855649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.855680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.856042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.856073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.856463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.856496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.856863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.856895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.857245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.857277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.857736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.857769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 
00:29:56.276 [2024-11-20 11:30:48.858027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.858060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.858418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.858451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.858811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.858842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.859200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.859238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.859569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.859599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.859960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.859993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.860345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.860377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.860743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.860774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.276 [2024-11-20 11:30:48.861134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.276 [2024-11-20 11:30:48.861173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.276 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.861611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.861641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 
00:29:56.277 [2024-11-20 11:30:48.862001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.862033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.862400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.862433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.862807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.862839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.863083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.863113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.863518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.863551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.863906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.863939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.864298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.864330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.864702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.864733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.865074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.865103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.865505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.865537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 
00:29:56.277 [2024-11-20 11:30:48.865894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.865925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.866182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.866214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.866510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.866542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.866866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.866897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.867249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.867284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.867497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.867527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.867885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.867915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.868280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.868313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.868579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.868610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.868970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.869001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 
00:29:56.277 [2024-11-20 11:30:48.870754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.870828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.871274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.871313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.873027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.873083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.873483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.873519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.873878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.873909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.874279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.874310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.874659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.874691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.875040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.875070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.875425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.875457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.875805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.875836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 
00:29:56.277 [2024-11-20 11:30:48.877644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.877705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.878104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.878141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.878566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.878600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.878840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.878869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.879264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.879297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.879673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.277 [2024-11-20 11:30:48.879704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.277 qpair failed and we were unable to recover it. 00:29:56.277 [2024-11-20 11:30:48.880047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.880079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.880445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.880476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.880833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.880864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.881232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.881265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 
00:29:56.278 [2024-11-20 11:30:48.881620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.881650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.882019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.882051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.882468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.882500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.882886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.882917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.883267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.883299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.883665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.883694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.884077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.884107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.884499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.884532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.884888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.884919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.885280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.885312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 
00:29:56.278 [2024-11-20 11:30:48.885678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.885708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.886056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.886087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.886444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.886476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.886870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.886903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.887306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.887341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.887699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.887729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.888087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.888119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.888501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.888533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.888892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.888925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.889280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.889312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 
00:29:56.278 [2024-11-20 11:30:48.891040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.891103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.891690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.891740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.892098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.892130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.892494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.892527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.892878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.892909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.893250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.893285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.893638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.893668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.894040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.894071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.894322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.278 [2024-11-20 11:30:48.894354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.278 qpair failed and we were unable to recover it. 00:29:56.278 [2024-11-20 11:30:48.894692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.279 [2024-11-20 11:30:48.894723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.279 qpair failed and we were unable to recover it. 
00:29:56.279 [2024-11-20 11:30:48.895071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.279 [2024-11-20 11:30:48.895101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:56.279 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection attempt between 11:30:48.895 and 11:30:48.988; only the timestamps differ ...]
00:29:56.284 [2024-11-20 11:30:48.988536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.284 [2024-11-20 11:30:48.988566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:56.284 qpair failed and we were unable to recover it.
00:29:56.284 [2024-11-20 11:30:48.988880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.988914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.989280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.989311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.989597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.989627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.989986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.990015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.990332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.990364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.990732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.990761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.991107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.991136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.991528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.284 [2024-11-20 11:30:48.991559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.284 qpair failed and we were unable to recover it. 00:29:56.284 [2024-11-20 11:30:48.991918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.991948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.992312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.992343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 
00:29:56.285 [2024-11-20 11:30:48.992723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.992754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.993044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.993073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.993406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.993437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.993804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.993834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.994202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.994234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.994627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.994656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.994984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.995014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.285 [2024-11-20 11:30:48.995307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.285 [2024-11-20 11:30:48.995338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.285 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.995713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.995744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.996108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.996139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 
00:29:56.556 [2024-11-20 11:30:48.996551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.996588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.996969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.996997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.997338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.997369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.997735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.997768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.998115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.998147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.556 [2024-11-20 11:30:48.998540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.556 [2024-11-20 11:30:48.998569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.556 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:48.998931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:48.998967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:48.999319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:48.999350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:48.999728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:48.999757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.000024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.000056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 
00:29:56.557 [2024-11-20 11:30:49.000395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.000427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.000833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.000862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.001238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.001268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.001710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.001739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.002033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.002063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.002414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.002443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.002765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.002796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.003149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.003192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.003526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.003554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.003910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.003939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 
00:29:56.557 [2024-11-20 11:30:49.004320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.004351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.004724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.004753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.005116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.005144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.005513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.005542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.005928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.005957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.006347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.006378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.006647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.006679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.007017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.007046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.007406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.007438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.007805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.007836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 
00:29:56.557 [2024-11-20 11:30:49.008217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.008248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.008631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.008661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.009019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.009048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.009474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.009511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.009853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.009884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.010247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.010281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.010663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.010692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.011044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.011074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.011441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.011472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.011810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.011839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 
00:29:56.557 [2024-11-20 11:30:49.012001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.012035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.012391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.012422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.012785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.012813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.013185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.557 [2024-11-20 11:30:49.013218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.557 qpair failed and we were unable to recover it. 00:29:56.557 [2024-11-20 11:30:49.013582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.013612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.013976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.014003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.014373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.014404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.014765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.014797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.015175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.015206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.015460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.015488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 
00:29:56.558 [2024-11-20 11:30:49.015911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.015941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.016237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.016266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.016652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.016682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.017054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.017084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.017423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.017453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.017811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.017839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.018085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.018116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.018533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.018565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.018923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.018951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.019319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.019350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 
00:29:56.558 [2024-11-20 11:30:49.019719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.019750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.020155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.020207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.020603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.020631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.020972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.021004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.021373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.021405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.021804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.021833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.022241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.022271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.022623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.022652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.023008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.023038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.023392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.023423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 
00:29:56.558 [2024-11-20 11:30:49.023781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.023809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.024183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.024216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.024585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.024614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.024976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.025004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.025385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.025424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.025848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.025878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.026245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.026277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.026660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.026688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 00:29:56.558 [2024-11-20 11:30:49.027042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.558 [2024-11-20 11:30:49.027072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:56.558 qpair failed and we were unable to recover it. 
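errno 111 on Linux is ECONNREFUSED: the target address 10.0.0.2 is reachable, but nothing is accepting connections on NVMe/TCP port 4420, so every connect attempt fails immediately and the host keeps retrying. A minimal standalone sketch in plain POSIX sockets (illustrative only, not SPDK's posix_sock_create()) that surfaces the same errno when pointed at a port with no listener:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same endpoint the log is retrying against. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}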
00:29:56.558 Read completed with error (sct=0, sc=8) 00:29:56.558 starting I/O failed
[... 31 more Read/Write completions (22 reads, 9 writes) fail the same way, each reporting error (sct=0, sc=8) followed by "starting I/O failed" ...]
00:29:56.559 [2024-11-20 11:30:49.027891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.559 [2024-11-20 11:30:49.028562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.559 [2024-11-20 11:30:49.028665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.559 qpair failed and we were unable to recover it.
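This block is the interesting transition: 32 outstanding I/Os (23 reads, 9 writes) complete with error (sct=0, sc=8), spdk_nvme_qpair_process_completions() reports CQ transport error -6 (-ENXIO, "No such device or address"), and a fresh qpair (0x7fa99c000b90) immediately falls into the same connect-retry cycle. Reading the status pair against the NVMe base specification's Generic Command Status table, SCT 0x0 / SC 0x08 should decode to "Command Aborted due to SQ Deletion", i.e. the in-flight commands were aborted when the failed queue pair was torn down. A small illustrative decoder (hypothetical helper, not an SPDK API):

#include <stdio.h>

/* Partial Generic Command Status (SCT 0x0) mapping, per my reading of
 * the NVMe base specification; hypothetical helper, not an SPDK API. */
static const char *nvme_generic_sc_str(unsigned int sct, unsigned int sc)
{
    if (sct != 0x0)
        return "not a Generic Command Status code";
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x06: return "Internal Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status code";
    }
}

int main(void)
{
    /* Every failed I/O above reports sct=0, sc=8. */
    printf("sct=0, sc=8 -> %s\n", nvme_generic_sc_str(0x0, 0x08));
    return 0;
}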
[... the retry triplet now repeats against the new tqpair=0x7fa99c000b90, connect() failed (errno = 111) on every attempt, continuing through 11:30:49.051 ...]
00:29:56.560 [2024-11-20 11:30:49.051383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.051413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it.
00:29:56.560 [2024-11-20 11:30:49.051782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.051811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.052204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.052237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.052596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.052625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.053001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.053031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.053428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.053459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.053815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.053844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.054072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.560 [2024-11-20 11:30:49.054101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.560 qpair failed and we were unable to recover it. 00:29:56.560 [2024-11-20 11:30:49.054366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.054402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.054770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.054798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.055157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.055195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 
00:29:56.561 [2024-11-20 11:30:49.055557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.055585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.055964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.055993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.056252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.056285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.056655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.056685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.057043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.057071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.057454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.057484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.057846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.057875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.058247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.058277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.058674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.058701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.059062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.059091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 
00:29:56.561 [2024-11-20 11:30:49.059465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.059495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.059735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.059767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.060141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.060179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.060562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.060592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.060958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.060988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.061358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.061388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.061647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.061679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.062044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.062074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.062500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.062530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.062888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.062918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 
00:29:56.561 [2024-11-20 11:30:49.063291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.063319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.063680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.063708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.064084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.064114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.064504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.064534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.064896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.064926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.065196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.065227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.065594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.065623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.065863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.065891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.561 [2024-11-20 11:30:49.066147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.561 [2024-11-20 11:30:49.066189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.561 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.066589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.066617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 
00:29:56.562 [2024-11-20 11:30:49.066870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.066899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.067266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.067297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.067668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.067696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.067923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.067952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.068334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.068365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.068730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.068758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.069132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.069168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.069520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.069549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.069916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.069944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.070311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.070340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 
00:29:56.562 [2024-11-20 11:30:49.070722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.070753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.071178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.071208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.071576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.071604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.071970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.071998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.072338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.072369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.072739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.072767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.073139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.073185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.073591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.073621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.073995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.074024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.074268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.074297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 
00:29:56.562 [2024-11-20 11:30:49.074640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.074669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.075017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.075047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.075336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.075366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.075747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.075776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.076031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.076059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.076285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.076314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.076687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.076716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.077057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.077088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.077447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.077477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.077817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.077845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 
00:29:56.562 [2024-11-20 11:30:49.078214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.078242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.078625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.078653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.079007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.079036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.079429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.079458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.079826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.079862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.080189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.562 [2024-11-20 11:30:49.080219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.562 qpair failed and we were unable to recover it. 00:29:56.562 [2024-11-20 11:30:49.080507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.080535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.080747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.080775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.081116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.081144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.081527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.081559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 
00:29:56.563 [2024-11-20 11:30:49.081944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.081972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.082333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.082364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.082734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.082764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.083109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.083138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.083564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.083594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.083996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.084025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.084382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.084413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.084706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.084734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.085115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.085144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.085512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.085541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 
00:29:56.563 [2024-11-20 11:30:49.085904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.085932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.086306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.086337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.086606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.086635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.086972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.087000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.087379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.087409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.087799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.087827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.088207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.088237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.088600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.088628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.088994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.089022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.089456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.089486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 
00:29:56.563 [2024-11-20 11:30:49.089725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.089753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.090119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.090148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.090497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.090526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.090900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.090929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.091181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.091212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.091569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.091597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.091965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.091993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.092396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.092425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.092789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.092819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.093188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.093219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 
00:29:56.563 [2024-11-20 11:30:49.093473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.093501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.093840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.093868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.094236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.094266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.563 qpair failed and we were unable to recover it. 00:29:56.563 [2024-11-20 11:30:49.094635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.563 [2024-11-20 11:30:49.094662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.094894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.094929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.095238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.095270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.095617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.095649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.096012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.096041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.096409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.096440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.096799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.096827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 
00:29:56.564 [2024-11-20 11:30:49.097097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.097125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.097521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.097550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.097805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.097838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.098192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.098222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.098588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.098616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.098962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.098990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.099344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.099373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.099802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.099830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.100227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.100261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 00:29:56.564 [2024-11-20 11:30:49.100635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.564 [2024-11-20 11:30:49.100665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.564 qpair failed and we were unable to recover it. 
00:29:56.564 [2024-11-20 11:30:49.101030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.564 [2024-11-20 11:30:49.101058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:56.564 qpair failed and we were unable to recover it.
00:29:56.569 [log compacted: the identical failure triplet (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats 209 more times between 2024-11-20 11:30:49.101398 and 11:30:49.180183; errno 111 is ECONNREFUSED, i.e. every connect() attempt to 10.0.0.2:4420 was refused and no qpair recovered]
00:29:56.570 [2024-11-20 11:30:49.180537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.180565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.180927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.180957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.181309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.181341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.181587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.181615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.181987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.182015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.182387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.182418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.182773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.182800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.183194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.183225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.183465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.183493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.183840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.183868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 
00:29:56.570 [2024-11-20 11:30:49.184238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.184268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.184716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.184745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.185107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.185136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.185507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.185537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.185905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.185940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.186303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.186332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.186673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.186702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.187084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.187112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.187465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.187495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 00:29:56.570 [2024-11-20 11:30:49.187865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.570 [2024-11-20 11:30:49.187895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.570 qpair failed and we were unable to recover it. 
00:29:56.570 [2024-11-20 11:30:49.188262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.188292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.188659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.188687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.189043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.189072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.189410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.189440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.189793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.189822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.190188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.190219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.190657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.190686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.191018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.191047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.191413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.191443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.191804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.191832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 
00:29:56.571 [2024-11-20 11:30:49.192191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.192223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.192620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.192649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.192901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.192929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.193295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.193325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.193702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.193731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.194000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.194028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.194389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.194420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.194787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.194817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.195068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.195098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.195480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.195509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 
00:29:56.571 [2024-11-20 11:30:49.195881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.195909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.196262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.196292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.196661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.196690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.197060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.197091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.197469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.197499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.197752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.197781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.198038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.198068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.198430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.198460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.198821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.198849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.199212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.199243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 
00:29:56.571 [2024-11-20 11:30:49.199626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.199655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.200056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.200085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.200462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.200491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.200853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.200883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.201235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.201272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.201438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.201470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.571 [2024-11-20 11:30:49.201874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.571 [2024-11-20 11:30:49.201902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.571 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.202337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.202368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.202709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.202739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.203096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.203123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 
00:29:56.572 [2024-11-20 11:30:49.203508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.203537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.203902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.203929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.204303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.204333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.204697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.204726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.205089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.205117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.205466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.205495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.205856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.205885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.206253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.206285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.206639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.206667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.207034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.207062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 
00:29:56.572 [2024-11-20 11:30:49.207430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.207459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.207822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.207852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.208214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.208244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.208614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.208641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.209009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.209037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.209397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.209427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.209792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.209820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.210187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.210217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.210596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.210625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.210993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.211022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 
00:29:56.572 [2024-11-20 11:30:49.211387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.211417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.211778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.211806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.212177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.212207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.212550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.212581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.212940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.212968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.213334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.213363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.213703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.213731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.214095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.214123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.214485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.572 [2024-11-20 11:30:49.214515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.572 qpair failed and we were unable to recover it. 00:29:56.572 [2024-11-20 11:30:49.214879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.214907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-20 11:30:49.215270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.215301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.215668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.215695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.216067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.216094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.216464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.216494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.216852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.216889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.217128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.217168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.217554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.217585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.217925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.217955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.218291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.218320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.218690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.218718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-20 11:30:49.219084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.219112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.219476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.219506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.219906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.219934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.220363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.220393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.220634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.220663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.221033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.221063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.221427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.221457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.221819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.221848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.222216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.222246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.222612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.222641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-20 11:30:49.223009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.223037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.223388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.223419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.223786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.223821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.224188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.224219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.224572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.224602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.224862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.224890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.225271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.225301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.225549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.225577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.225983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.226012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.226276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.226306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 
00:29:56.573 [2024-11-20 11:30:49.226689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.226717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.227083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.227113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.227540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.227570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.227928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.227958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.228335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.573 [2024-11-20 11:30:49.228365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.573 qpair failed and we were unable to recover it. 00:29:56.573 [2024-11-20 11:30:49.228720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-20 11:30:49.228748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-20 11:30:49.229113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-20 11:30:49.229150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-20 11:30:49.229520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-20 11:30:49.229548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-20 11:30:49.229910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-20 11:30:49.229939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 00:29:56.574 [2024-11-20 11:30:49.230307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.574 [2024-11-20 11:30:49.230336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.574 qpair failed and we were unable to recover it. 
00:29:56.574 [2024-11-20 11:30:49.230710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.574 [2024-11-20 11:30:49.230738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:56.574 qpair failed and we were unable to recover it.
[... the same three-line triplet — "connect() failed, errno = 111", "sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats for roughly 200 further connection attempts, timestamps 2024-11-20 11:30:49.230986 through 11:30:49.310423 ...]
00:29:56.852 [2024-11-20 11:30:49.310686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.310714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.311062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.311091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.311460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.311491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.311830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.311858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.312202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.312234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.312612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.312640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.313009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.313038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.313295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.313327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.313728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.313758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.314156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.314195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 
00:29:56.852 [2024-11-20 11:30:49.314543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.314572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.314928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.314956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.315319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.315350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.315762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.315791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.316133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.316170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.316419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.316447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.316818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.316846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.317214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.317244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.317614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.317643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.318012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.318040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 
00:29:56.852 [2024-11-20 11:30:49.318309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.318339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.318686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.318723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.319111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.319139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.319526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.319555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.319925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.319953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.320191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.320222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.320600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.320630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.320998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.321026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.321414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.321444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.321676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.321704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 
00:29:56.852 [2024-11-20 11:30:49.322077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.322105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.322363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.322392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.322751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.322780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.323166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.323196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.323549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.323578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.323955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.323984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.324367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.324397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.852 qpair failed and we were unable to recover it. 00:29:56.852 [2024-11-20 11:30:49.324759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.852 [2024-11-20 11:30:49.324788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.325191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.325222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.325607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.325636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 
00:29:56.853 [2024-11-20 11:30:49.326003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.326032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.326394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.326424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.326783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.326810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.327182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.327214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.327503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.327531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.327899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.327927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.328328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.328359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.328728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.328756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.329128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.329178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.329550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.329579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 
00:29:56.853 [2024-11-20 11:30:49.329797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.329824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.330184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.330214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.330489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.330517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.330777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.330809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.331198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.331227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.331623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.331650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.332035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.332063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.332461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.332490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.332870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.332898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.333244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.333274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 
00:29:56.853 [2024-11-20 11:30:49.333533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.333561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.333927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.333962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.334321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.334353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.334719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.334748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.335115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.335142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.335529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.335557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.335928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.335956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.336329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.336359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.336592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.336621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.337013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.337041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 
00:29:56.853 [2024-11-20 11:30:49.337385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.337416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.337771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.337799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.338148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.338190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.338567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.338982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.853 [2024-11-20 11:30:49.339011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.853 qpair failed and we were unable to recover it. 00:29:56.853 [2024-11-20 11:30:49.339382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.339412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.339755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.339783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.340174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.340205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.340647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.340677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.340955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 
00:29:56.854 [2024-11-20 11:30:49.341193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.341226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.341451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.341479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.341752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.341783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.342030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.342059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.342325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.342357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.342631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.342661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.342987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.343016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.343421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.343451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.343785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.343814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.344155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.344192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 
00:29:56.854 [2024-11-20 11:30:49.344545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.344573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.344945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.344973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.345323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.345354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.345569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.345598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.345995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.346023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.346385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.346417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.346787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.346815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.347191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.347221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.347586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.347614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.347878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.347906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 
00:29:56.854 [2024-11-20 11:30:49.348261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.348290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.348534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.348568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.348927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.348956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.349330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.349359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.349734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.349762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.350109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.350137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.350505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.350533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.854 [2024-11-20 11:30:49.350976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.854 [2024-11-20 11:30:49.351005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.854 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.351347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.351378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.351733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.351762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 
00:29:56.855 [2024-11-20 11:30:49.352019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.352049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.352415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.352446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.352778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.352806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.353181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.353211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.353448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.353476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.353857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.353885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.354252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.354280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.354631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.354659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.355023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.355051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.355422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.355451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 
00:29:56.855 [2024-11-20 11:30:49.355814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.355842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.356207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.356235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.356640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.356669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.357033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.357061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.357411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.357440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.357806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.357833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.358196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.358225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.358605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.358632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.359001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.359029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 00:29:56.855 [2024-11-20 11:30:49.359386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.359415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it. 
00:29:56.855 [2024-11-20 11:30:49.359782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.855 [2024-11-20 11:30:49.359809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.855 qpair failed and we were unable to recover it.
00:29:56.855-00:29:56.861 [2024-11-20 11:30:49.360121 through 11:30:49.440697] (the same three-line sequence -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats roughly 200 further times; duplicate entries elided)
00:29:56.861 [2024-11-20 11:30:49.441052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.441081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.441457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.441492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.441826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.441855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.442224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.442253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.442610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.442638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.443005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.443032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.443386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.443415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.443775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.443804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.444156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.444204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.444525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.444553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 
00:29:56.861 [2024-11-20 11:30:49.444913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.444941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.445309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.445338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.445628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.445655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.446016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.446044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.446302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.446331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.446733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.446761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.447124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.447153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.447526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.447554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.447916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.447945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.448392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.448422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 
00:29:56.861 [2024-11-20 11:30:49.448784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.448812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.449171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.449202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.449533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.449562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.449910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.449938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.450181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.450210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.450506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.450535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.450900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.450929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.451194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.861 [2024-11-20 11:30:49.451223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.861 qpair failed and we were unable to recover it. 00:29:56.861 [2024-11-20 11:30:49.451602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.451631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.452001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.452029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 
00:29:56.862 [2024-11-20 11:30:49.452399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.452430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.452799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.452827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.453263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.453293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.453620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.453649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.454005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.454035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.454384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.454414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.454777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.454814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.455175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.455205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.455548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.455577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.455934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.455963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 
00:29:56.862 [2024-11-20 11:30:49.456334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.456363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.456735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.456763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.457120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.457149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.457518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.457557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.457915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.457943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.458379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.458409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.458760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.458789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.459146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.459197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.459553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.459582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.459931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.459960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 
00:29:56.862 [2024-11-20 11:30:49.460326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.460357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.463684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.463784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.464157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.464228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.464585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.464614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.464981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.465010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.465355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.465390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.465729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.465759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.466133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.466171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.466534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.466564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.466923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.466950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 
00:29:56.862 [2024-11-20 11:30:49.467320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.467349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.467721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.467750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.468190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.468222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.468561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.468591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.468854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.468882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.469260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.862 [2024-11-20 11:30:49.469290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.862 qpair failed and we were unable to recover it. 00:29:56.862 [2024-11-20 11:30:49.469639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.469668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.469924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.469953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.470280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.470317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.470669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.470698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 
00:29:56.863 [2024-11-20 11:30:49.471065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.471093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.471455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.471484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.471913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.471941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.472285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.472316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.472663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.472691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.473060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.473088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.473347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.473381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.473757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.473785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.474153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.474193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.474517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.474546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 
00:29:56.863 [2024-11-20 11:30:49.474909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.474937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.475301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.475331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.475683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.475712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.476075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.476103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.476468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.476498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.476863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.476891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.477251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.477281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.477656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.477684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.478061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.478089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.478468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.478498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 
00:29:56.863 [2024-11-20 11:30:49.478833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.478862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.479224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.479254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.479609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.479636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.480003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.480031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.480291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.480325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.480710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.480738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.481084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.481113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.481476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.481506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.481717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.481745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 00:29:56.863 [2024-11-20 11:30:49.482056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.863 [2024-11-20 11:30:49.482085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.863 qpair failed and we were unable to recover it. 
00:29:56.863 [2024-11-20 11:30:49.482426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.482458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.482808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.482837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.483189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.483219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.483579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.483608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.483984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.484011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.484353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.484385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.484746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.484774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.485135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.485172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.485530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.485571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.485904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.485932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 
00:29:56.864 [2024-11-20 11:30:49.486288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.486318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.486684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.486711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.487074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.487102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.487470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.487500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.487860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.487888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.488251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.488282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.488628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.488656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.488929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.488956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.489301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.489330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.489694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.489723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 
00:29:56.864 [2024-11-20 11:30:49.490058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.490086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.490418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.490449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.490803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.490833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.491270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.491299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.491670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.491698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.492055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.492086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.492436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.492467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.492825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.492854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.493306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.493335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.493680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.493707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 
00:29:56.864 [2024-11-20 11:30:49.494081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.494110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.494520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.494549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.494908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.494936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.495290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.495320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.495697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.495725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.496094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.496122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.496559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.496590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.496934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.864 [2024-11-20 11:30:49.496963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.864 qpair failed and we were unable to recover it. 00:29:56.864 [2024-11-20 11:30:49.497305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.865 [2024-11-20 11:30:49.497335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.865 qpair failed and we were unable to recover it. 00:29:56.865 [2024-11-20 11:30:49.497758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.865 [2024-11-20 11:30:49.497785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.865 qpair failed and we were unable to recover it. 
00:29:56.870 [2024-11-20 11:30:49.571118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.571148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.571494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.571522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.571773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.571801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.572173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.572204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.572564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.572593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.573034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.573064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.573403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.573436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.573800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.573829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.574195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.574225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.574582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.574610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 
00:29:56.870 [2024-11-20 11:30:49.574848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.574877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.575178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.575209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.575580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.575609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.575975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.576003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.576351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.576380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.576623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.576652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.577011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.577039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.577385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.577415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:56.870 [2024-11-20 11:30:49.577843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.870 [2024-11-20 11:30:49.577878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:56.870 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-20 11:30:49.578217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-11-20 11:30:49.578249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 
00:29:57.142 [2024-11-20 11:30:49.578626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-11-20 11:30:49.578658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-20 11:30:49.578999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-11-20 11:30:49.579028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-20 11:30:49.579381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-11-20 11:30:49.579410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-20 11:30:49.579781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-11-20 11:30:49.579811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-20 11:30:49.580063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.142 [2024-11-20 11:30:49.580091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.142 qpair failed and we were unable to recover it. 00:29:57.142 [2024-11-20 11:30:49.580456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.580485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.580743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.580772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.581128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.581156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.581534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.581563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.581802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.581830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-11-20 11:30:49.582227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.582257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.582638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.582667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.583027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.583056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.583420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.583450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.583687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.583715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.584092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.584121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.584499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.584529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.584864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.584893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.585257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.585287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.585665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.585693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-11-20 11:30:49.585951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.585979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.586290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.586320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.586544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.586572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.586939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.586968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.587387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.587416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.587767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.587797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.588149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.588186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.588536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.588564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.588937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.588965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.589318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.589347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-11-20 11:30:49.589698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.589727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.590000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.590028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.590401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.590432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.590775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.590804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.591063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.591092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.591451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.591483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.591826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.591856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.592194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.592225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.592624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.592660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.593016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.593046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 
00:29:57.143 [2024-11-20 11:30:49.593431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.593464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.593705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.593734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.143 qpair failed and we were unable to recover it. 00:29:57.143 [2024-11-20 11:30:49.594092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.143 [2024-11-20 11:30:49.594125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.594480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.594512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.594932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.594965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.595306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.595337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.595584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.595618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.595856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.595888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.596234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.596265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.596619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.596652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.144 [2024-11-20 11:30:49.597009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.597038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.597382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.597415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.597781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.597812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.598178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.598211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.598457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.598487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.598841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.598872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.599086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.599116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.599511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.599544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.599908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.599939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.600309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.600341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.144 [2024-11-20 11:30:49.600696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.600725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.601086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.601118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.601349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.601384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.601741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.601771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.602134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.602184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.602564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.602595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.602954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.602986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.603359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.603391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.603787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.603818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.604180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.604213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.144 [2024-11-20 11:30:49.604565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.604595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.604989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.605020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.605391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.605424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.605786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.605817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.606080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.606110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.606353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.606386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.606815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.606845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.607092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.607125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.607517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.607556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 00:29:57.144 [2024-11-20 11:30:49.607906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.144 [2024-11-20 11:30:49.607937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.144 qpair failed and we were unable to recover it. 
00:29:57.144 [2024-11-20 11:30:49.608299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.608330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.608690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.608721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.609083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.609115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.609377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.609408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.609764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.609794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.610192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.610225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.610599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.610630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.610990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.611020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.611370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.611403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.611761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.611791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 
00:29:57.145 [2024-11-20 11:30:49.612149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.612188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.612580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.612611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.612979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.613009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.613390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.613423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.613662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.613695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.614047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.614078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.614405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.614436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.614898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.614928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.615283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.615315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.615684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.615715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 
00:29:57.145 [2024-11-20 11:30:49.616075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.616105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.616471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.616503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.616858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.616890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.617234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.617265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.617615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.617646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.618010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.618041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.618403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.618435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.618780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.618810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.619172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.619206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.619570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.619602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 
00:29:57.145 [2024-11-20 11:30:49.619952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.619982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.620341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.620375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.620731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.620761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.621123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.621153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.621519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.621551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.621782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.621815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.622148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.622198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.145 [2024-11-20 11:30:49.622560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.145 [2024-11-20 11:30:49.622592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.145 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.622956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.622994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.623341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.623374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 
00:29:57.146 [2024-11-20 11:30:49.623725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.623754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.624106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.624135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.624536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.624568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.624926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.624956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.625319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.625352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.625729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.625760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.626119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.626151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.626544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.626932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.626963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 00:29:57.146 [2024-11-20 11:30:49.627315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.146 [2024-11-20 11:30:49.627347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.146 qpair failed and we were unable to recover it. 
00:29:57.151 [2024-11-20 11:30:49.705005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.705035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-11-20 11:30:49.705395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.705430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-11-20 11:30:49.705786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.705816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-11-20 11:30:49.706178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.706211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-11-20 11:30:49.706570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.706600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-11-20 11:30:49.706969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.707001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.151 qpair failed and we were unable to recover it. 00:29:57.151 [2024-11-20 11:30:49.707343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.151 [2024-11-20 11:30:49.707374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.707751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.707781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.708142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.708180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.708553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.708584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-11-20 11:30:49.708940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.708971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.709332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.709365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.709723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.709753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.710192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.710224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.710575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.710607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.710955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.710993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.711346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.711378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.711611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.711643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.712014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.712046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.712406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.712438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-11-20 11:30:49.712859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.712889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.713276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.713308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.713667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.713700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.714032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.714062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.714423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.714456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.714833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.714865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.715225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.715258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.715517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.715551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.715938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.715968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.716215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.716247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-11-20 11:30:49.716623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.716654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.717031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.717061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.717406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.717438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.717805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.717835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.718066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.718095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.718471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.718502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.718866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.718898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.719239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.719273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.719665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.719695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.720059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.720088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 
00:29:57.152 [2024-11-20 11:30:49.720441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.720475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.152 [2024-11-20 11:30:49.720881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.152 [2024-11-20 11:30:49.720912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.152 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.721280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.721312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.721575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.721605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.721839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.721872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.722231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.722263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.722646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.722677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.723042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.723073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.723460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.723493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.723846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.723878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 
00:29:57.153 [2024-11-20 11:30:49.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.724268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.724654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.724684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.725036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.725068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.725419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.725451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.725810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.725842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.726210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.726248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.726621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.726651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.727017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.727048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.727340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.727373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.727718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.727750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 
00:29:57.153 [2024-11-20 11:30:49.727992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.728023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.728395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.728426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.728780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.728810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.729150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.729188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.729589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.729620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.729976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.730006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.730393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.730426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.730824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.730855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.731206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.731244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.731619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.731650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 
00:29:57.153 [2024-11-20 11:30:49.732009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.732042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.732409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.732440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.732797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.732828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.733190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.733224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.733603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.733632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.733996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.734026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.734366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.153 [2024-11-20 11:30:49.734399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.153 qpair failed and we were unable to recover it. 00:29:57.153 [2024-11-20 11:30:49.734762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.734793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.735025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.735053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.735477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.735509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 
00:29:57.154 [2024-11-20 11:30:49.735857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.735888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.736241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.736274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.736646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.736676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.737035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.737066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.737411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.737445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.737796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.737826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.738195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.738229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.738583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.738615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.738971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.739001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.739236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.739271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 
00:29:57.154 [2024-11-20 11:30:49.739676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.739707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.740061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.740093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.740441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.740472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.740845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.740877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.741234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.741266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.741635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.741672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.742026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.742057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.742414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.742447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.742803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.742833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.743196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.743229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 
00:29:57.154 [2024-11-20 11:30:49.743563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.743595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.743952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.743981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.744250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.744281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.744649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.744680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.745037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.745067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.745302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.745336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.745711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.745742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.746109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.746139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.746511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.746543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.746866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.746898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 
00:29:57.154 [2024-11-20 11:30:49.747257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.747290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.747642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.747673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.748034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.748065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.154 qpair failed and we were unable to recover it. 00:29:57.154 [2024-11-20 11:30:49.748405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.154 [2024-11-20 11:30:49.748436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.748793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.748824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.749182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.749213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.749571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.749601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.749971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.750002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.750267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.750298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.750644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.750674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 
00:29:57.155 [2024-11-20 11:30:49.751039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.751069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.751410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.751445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.751831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.751861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.752230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.752262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.752614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.752646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.753008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.753039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.753400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.753431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.753797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.753828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.754193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.754226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 00:29:57.155 [2024-11-20 11:30:49.754620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.155 [2024-11-20 11:30:49.754650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.155 qpair failed and we were unable to recover it. 
00:29:57.155 [2024-11-20 11:30:49.755000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.155 [2024-11-20 11:30:49.755031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.155 qpair failed and we were unable to recover it.
00:29:57.155 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fa99c000b90 (addr=10.0.0.2, port=4420) repeats from 11:30:49.755 through 11:30:49.804 ...]
00:29:57.159 [2024-11-20 11:30:49.804473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.159 [2024-11-20 11:30:49.804507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.159 qpair failed and we were unable to recover it.
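Aside: errno = 111 in the entries above is Linux ECONNREFUSED. A minimal, self-contained C sketch (not part of SPDK or this log; the address and port are copied from the entries above purely for illustration) that reproduces the same failure when nothing is listening on the target port:

/*
 * Standalone reproduction of the "connect() failed, errno = 111" entries:
 * a TCP connect() to a port where the peer actively refuses the connection
 * fails with ECONNREFUSED, which is 111 on Linux.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}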
00:29:57.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2925172 Killed "${NVMF_APP[@]}" "$@"
00:29:57.159 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:57.159 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:57.159 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:57.159 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:57.159 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:57.159 [... the connect() failed, errno = 111 / sock connection error pair for tqpair=0x7fa99c000b90 continues from 11:30:49.804881 through 11:30:49.807053, interleaved with the trace above ...]
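Aside: while the killed target is being restarted by disconnect_init, the host side keeps reattempting the TCP connection, which is what produces the repeating error pair. A hypothetical C retry loop sketching that behavior; the retry count and the 100 ms backoff are illustrative assumptions, not SPDK's actual reconnect policy:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Keep retrying connect() while the target is down; give up after a bound. */
static int connect_with_retry(const char *ip, uint16_t port, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(port),
        };
        inet_pton(AF_INET, ip, &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            return fd;            /* target is back: hand the socket up */
        }
        fprintf(stderr, "try %d: connect() failed, errno = %d (%s)\n",
                i, errno, strerror(errno));
        close(fd);
        usleep(100 * 1000);       /* illustrative 100 ms backoff */
    }
    return -1;                    /* analogous to "qpair failed and we were unable to recover it" */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 50);
    if (fd >= 0) {
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}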
00:29:57.159 [2024-11-20 11:30:49.807426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.159 [2024-11-20 11:30:49.807460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.159 qpair failed and we were unable to recover it.
00:29:57.159 [... the same error pair repeats ...]
00:29:57.159 [2024-11-20 11:30:49.814215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.159 [2024-11-20 11:30:49.814247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.159 qpair failed and we were unable to recover it.
00:29:57.159 [2024-11-20 11:30:49.814488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.159 [2024-11-20 11:30:49.814516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.159 qpair failed and we were unable to recover it.
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2926206
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2926206
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2926206 ']'
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:57.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:57.160 11:30:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:57.160 [... the same connect() failed, errno = 111 error pair for tqpair=0x7fa99c000b90 continues from 11:30:49.814866 through 11:30:49.820040, interleaved with the trace above ...]
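Aside: waitforlisten in the trace above blocks until the freshly started nvmf_tgt process (pid 2926206) is up and answering on /var/tmp/spdk.sock, giving up after max_retries=100. A hypothetical C sketch of that kind of poll-until-listening helper; the 100 ms poll interval is an assumption, and SPDK's real helper is a shell function, not this code:

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Poll a UNIX domain socket until the daemon behind it accepts
 * connections, bounded by max_retries attempts. */
static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;             /* daemon is up and listening */
        }
        close(fd);
        usleep(100 * 1000);       /* not up yet; wait and retry */
    }
    return -1;                    /* gave up after max_retries */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}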
00:29:57.160 [... reconnect attempts from 11:30:49.820392 through 11:30:49.868272 all fail the same way: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:29:57.164 [... reconnect attempts from 11:30:49.868657 through 11:30:49.870572 fail with the same connect() failed, errno = 111 / sock connection error pair, each ending "qpair failed and we were unable to recover it." ...]
00:29:57.436 [2024-11-20 11:30:49.870697] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization...
00:29:57.436 [2024-11-20 11:30:49.870766] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:57.436 [... reconnect attempts from 11:30:49.870964 through 11:30:49.871869 continue to fail the same way ...]
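The two initialization lines above show a fresh SPDK application (the nvmf target) starting up mid-test while the host keeps retrying. The bracketed EAL parameters are standard DPDK options: -c 0xF0 is a hexadecimal core mask selecting cores 4-7, --file-prefix=spdk0 keeps this process's hugepage files distinct so several DPDK processes can coexist, --base-virtaddr fixes the base virtual address for memory mappings, and --proc-type=auto lets EAL choose primary or secondary mode. A hedged sketch of how such an argument vector is handed to DPDK; SPDK assembles this argv internally, and this standalone example assumes only DPDK's public rte_eal_init()/rte_eal_cleanup():

```c
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                            /* program name, as shown in the log */
        "-c", "0xF0",                      /* hex core mask: run on cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",  /* fixed base for memory mappings */
        "--match-allocations",             /* free hugepages exactly as allocated */
        "--file-prefix=spdk0",             /* isolates this process's hugepage files */
        "--proc-type=auto",                /* primary or secondary, decided by EAL */
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }
    /* ... the real application would set up its nvmf transports here ... */
    rte_eal_cleanup();
    return 0;
}
```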
00:29:57.436 [... the reconnect failures continue unchanged from 11:30:49.872120 through 11:30:49.894669: posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420; each attempt ending "qpair failed and we were unable to recover it." ...]
00:29:57.437 [2024-11-20 11:30:49.894908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.894948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.895279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.895312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.895698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.895729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.896096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.896126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.896526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.896558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.896920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.896950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.897321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.897353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.897715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.437 [2024-11-20 11:30:49.897745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 11:30:49.898112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.898144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.898391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.898422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 
00:29:57.438 [2024-11-20 11:30:49.898773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.898803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.899176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.899208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.899559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.899590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.899962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.899992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.900349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.900383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.900732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.900762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.900989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.901018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.901263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.901298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.901738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.901770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.902131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.902170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 
00:29:57.438 [2024-11-20 11:30:49.902526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.902558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.902791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.902823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.903204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.903236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.903601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.903634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.904008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.904040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.904418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.904451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.904701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.904731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.905119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.905151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.905520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.905554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.905924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.905956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 
00:29:57.438 [2024-11-20 11:30:49.906336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.906368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.906732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.906764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.907126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.907156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.907527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.907557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.907921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.907951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.908391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.908423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.908780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.908811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.909183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.909215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.909589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.909619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.909878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.909907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 
00:29:57.438 [2024-11-20 11:30:49.910150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.910199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.910591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.910622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 11:30:49.910983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.438 [2024-11-20 11:30:49.911013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.911394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.911426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.911776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.911808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.912146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.912188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.912576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.912607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.912982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.913013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.913393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.913426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.913791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.913820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 
00:29:57.439 [2024-11-20 11:30:49.914197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.914230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.914590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.914623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.915016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.915046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.915415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.915447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.915819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.915850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.916214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.916246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.916619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.916650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.917019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.917049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.917300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.917331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.917594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.917625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 
00:29:57.439 [2024-11-20 11:30:49.917984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.918391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.918423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.918787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.918816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.919183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.919214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.919452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.919481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.919845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.919875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.920255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.920287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.920537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.920570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.920929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.920958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.921332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.921364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 
00:29:57.439 [2024-11-20 11:30:49.921725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.921756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.922010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.922040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.922398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.922429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.922790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.922820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.923199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.923231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.923592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.923625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.923990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.924020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.924393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.924425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.924781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.439 [2024-11-20 11:30:49.924811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 11:30:49.925189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.925222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-11-20 11:30:49.925462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.925498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.925864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.925894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.926258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.926288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.926631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.926662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.927028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.927057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.927428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.927459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.927828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.927858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.928227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.928258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.928631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.928661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.929025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.929057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-11-20 11:30:49.929428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.929461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.929826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.929857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.930221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.930253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.930625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.930656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.931019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.931051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.931443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.931474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.931811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.931842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.932209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.932243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.932620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.932650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.932874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.932904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-11-20 11:30:49.933255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.933288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.933661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.933692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.934057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.934087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.934342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.934376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.934737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.934767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.935148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.935187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.935544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.935576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.935951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.935983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.936418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.936450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.936808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.936838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 
00:29:57.440 [2024-11-20 11:30:49.937204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.937237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.937469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.937500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.440 qpair failed and we were unable to recover it. 00:29:57.440 [2024-11-20 11:30:49.937879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.440 [2024-11-20 11:30:49.937909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.938278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.938309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.938537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.938567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.938821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.938851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.939223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.939255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.939639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.939985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.940016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.940389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.940421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-11-20 11:30:49.940781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.940823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.941188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.941221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.941466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.941495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.941849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.941879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.942290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.942322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.942692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.942723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.943089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.943119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.943530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.943562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.943922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.943952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 00:29:57.441 [2024-11-20 11:30:49.944311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.441 [2024-11-20 11:30:49.944344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:57.441 qpair failed and we were unable to recover it. 
00:29:57.441 [2024-11-20 11:30:49.944482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.441 [2024-11-20 11:30:49.944515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.441 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420; qpair unrecoverable) repeats for every reconnect attempt, with only the timestamps advancing from 11:30:49.944 to 11:30:49.973 ...]
00:29:57.443 [2024-11-20 11:30:49.973883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the connect() retry loop then resumes with identical failures from 11:30:49.974 through 11:30:50.023; the final attempt in this run is: ...]
00:29:57.447 [2024-11-20 11:30:50.023843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.447 [2024-11-20 11:30:50.023872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.447 qpair failed and we were unable to recover it.
00:29:57.447 [2024-11-20 11:30:50.027136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:57.447 [2024-11-20 11:30:50.027192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:57.447 [2024-11-20 11:30:50.027202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:57.447 [2024-11-20 11:30:50.027209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:57.447 [2024-11-20 11:30:50.027216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
...
00:29:57.447 [2024-11-20 11:30:50.029288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:57.447 [2024-11-20 11:30:50.029516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:57.447 [2024-11-20 11:30:50.029670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:57.447 [2024-11-20 11:30:50.029671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
...
00:29:57.450 [2024-11-20 11:30:50.073925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.450 [2024-11-20 11:30:50.073955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:57.450 qpair failed and we were unable to recover it.
00:29:57.450 [2024-11-20 11:30:50.074412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.451 [2024-11-20 11:30:50.074521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:57.451 qpair failed and we were unable to recover it.
...
00:29:57.451 [2024-11-20 11:30:50.083716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.451 [2024-11-20 11:30:50.083747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:57.451 qpair failed and we were unable to recover it.
00:29:57.451 [2024-11-20 11:30:50.084112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.084151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.084327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.084358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.084723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.084754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.085118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.085148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.085367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.085398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.085647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.085679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.086028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.086058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.086447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.086480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.086743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.086776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.087005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.087036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 
00:29:57.451 [2024-11-20 11:30:50.087286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.087317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.087678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.087711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.088087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.088118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.088513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.088546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.088910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.088943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.089180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.089213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.089542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.451 [2024-11-20 11:30:50.089572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.451 qpair failed and we were unable to recover it. 00:29:57.451 [2024-11-20 11:30:50.089971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.090004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.090330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.090363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.090714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.090745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-11-20 11:30:50.091167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.091200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.091419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.091451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.091841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.091873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.092247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.092280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.092655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.092686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.093048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.093081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.093361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.093399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.093500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.093530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.093888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.093920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.094280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.094312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-11-20 11:30:50.094536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.094568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.094998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.095030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.095403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.095436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.095786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.095818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.096029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.096060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.096417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.096449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.096814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.096846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.097192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.097226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.097446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.097477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.097728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.097757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-11-20 11:30:50.098031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.098065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.098323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.098356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.098604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.098634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.098859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.098890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.099262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.099293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.099664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.099694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.100073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.100107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.100502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.100535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.100758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.100789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.101138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.101179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 
00:29:57.452 [2024-11-20 11:30:50.101555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.101585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.101947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.101978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.102373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.102404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.102774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.102806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.103167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.103199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.103497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.103528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.103883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.103913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.452 [2024-11-20 11:30:50.104131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.452 [2024-11-20 11:30:50.104175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.452 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.104572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.104603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.104826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.104855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-11-20 11:30:50.105143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.105181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.105504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.105536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.105884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.105915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.106201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.106233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.106606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.106636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.106986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.107016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.107230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.107260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.107617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.107647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.107989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.108026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.108395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.108428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-11-20 11:30:50.108824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.108854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.109072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.109102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.109391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.109423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.109766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.109797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.110127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.110173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.110561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.110592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.110953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.110984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.111347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.111378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.111639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.111675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.112014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.112044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-11-20 11:30:50.112394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.112426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.112619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.112649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.112998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.113028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.113403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.113434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.113790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.113820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.114033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.114062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.114327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.114358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.114711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.114742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.115112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.115143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.115526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.115558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-11-20 11:30:50.115933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.115964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.116294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.116326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.116588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.116617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.116963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.116992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.117337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.117370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.117596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.117626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.117886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.117915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.118275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.118307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.118656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.118687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.118920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.118949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 
00:29:57.453 [2024-11-20 11:30:50.119306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.119338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.453 [2024-11-20 11:30:50.119714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.453 [2024-11-20 11:30:50.119745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.453 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.120112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.120143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.120484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.120516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.120628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.120660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.121040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.121071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.121415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.121449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.121813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.121843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.122056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.122086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.122437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.122477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-11-20 11:30:50.122880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.122911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.123239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.123272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.123494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.123529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.123689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.123719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.124090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.124122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.124356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.124388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.124609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.124638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.125005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.125038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.125399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.125432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.125798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.125829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-11-20 11:30:50.126048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.126077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.126426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.126457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.126795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.126825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.127179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.127211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.127544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.127574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.127924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.127954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.128321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.128354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.128623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.128653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.129002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.129032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.129415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.129448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-11-20 11:30:50.129799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.129830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.130034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.130063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.130416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.130447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.130796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.130828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.131096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.131124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.131412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.131443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.131538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.131575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.131917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.131948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.132314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.132347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.132701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.132732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 
00:29:57.454 [2024-11-20 11:30:50.133110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.133140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.454 qpair failed and we were unable to recover it. 00:29:57.454 [2024-11-20 11:30:50.133515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.454 [2024-11-20 11:30:50.133546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.133882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.133914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.134254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.134286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.134606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.134637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.134988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.135019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.135406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.135437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.135790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.135820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.136193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.136224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.136469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.136499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 
00:29:57.455 [2024-11-20 11:30:50.136858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.136888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.137116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.137148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.137498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.137530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.137889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.137920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.138272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.138304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.138529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.138559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.138781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.138812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.139178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.139209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.139445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.139475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.139832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.139863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 
00:29:57.455 [2024-11-20 11:30:50.140217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.140248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.140634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.140665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.141025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.141055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.141409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.141440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.141843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.141874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.142228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.142260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.142633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.142665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.143018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.143049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.143468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.143499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.143866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.143897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 
00:29:57.455 [2024-11-20 11:30:50.144252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.144284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.144649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.144680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.144983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.145014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.145337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.145369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.145608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.145636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.146004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.146034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.146405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.146437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.146803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.146841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.147221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.147251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.147650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.147681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 
00:29:57.455 [2024-11-20 11:30:50.148025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.148053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.148407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.148437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.148796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.455 [2024-11-20 11:30:50.148826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.455 qpair failed and we were unable to recover it. 00:29:57.455 [2024-11-20 11:30:50.149180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.149212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.149423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.149452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.149801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.149831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.150197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.150228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.150569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.150600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.150988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.151019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.151345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.151376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 
00:29:57.456 [2024-11-20 11:30:50.151726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.151756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.151969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.151999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.152240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.152274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.152637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.152667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.153008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.153038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.153272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.153302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.153687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.153716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.154078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.154110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.154365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.154396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.154813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.154844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 
00:29:57.456 [2024-11-20 11:30:50.155187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.155220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.155539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.155568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.155921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.155951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.156304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.156336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.156552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.156588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.156957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.156989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.157340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.157371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.157736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.157767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.158108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.158138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.158377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.158410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 
00:29:57.456 [2024-11-20 11:30:50.158745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.158775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.159115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.159147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.159499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.159530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.159736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.159765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.159978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.160009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.160407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.160439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.160776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.160808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.161145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.161183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.161440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.161471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.161809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.161840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 
00:29:57.456 [2024-11-20 11:30:50.162199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.162230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.162608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.162638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.162997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.163030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.163392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.456 [2024-11-20 11:30:50.163423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.456 qpair failed and we were unable to recover it. 00:29:57.456 [2024-11-20 11:30:50.163639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.163668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.164009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.164041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.164407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.164439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.164834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.164865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.165076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.165105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.165352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.165382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 
00:29:57.733 [2024-11-20 11:30:50.165735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.165765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.165991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.166021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.166308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.166339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.166718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.166750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.166842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.166869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.167178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.167208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.167449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.167482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.167821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.167851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.168201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.168234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.168579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.168609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 
00:29:57.733 [2024-11-20 11:30:50.168824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.168856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.169193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.733 [2024-11-20 11:30:50.169224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.733 qpair failed and we were unable to recover it. 00:29:57.733 [2024-11-20 11:30:50.169608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.169639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.169998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.170027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.170394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.170426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.170765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.170801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.171175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.171207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.171544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.171573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.171979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.172009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.172325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.172358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 
00:29:57.734 [2024-11-20 11:30:50.172702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.172731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.172953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.172981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.173333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.173364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.173704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.173734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.174109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.174138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.174542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.174573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.174931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.174961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.175191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.175220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.175466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.175497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.175923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.175954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 
00:29:57.734 [2024-11-20 11:30:50.176292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.176322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.176721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.176751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.177093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.177123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.177471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.177504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.177858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.177888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.178250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.178281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.178641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.178673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.179022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.179052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.179406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.179437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.179646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.179674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 
00:29:57.734 [2024-11-20 11:30:50.180038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.180067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.180400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.734 [2024-11-20 11:30:50.180432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.734 qpair failed and we were unable to recover it. 00:29:57.734 [2024-11-20 11:30:50.180781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.180816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.181036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.181065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.181400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.181430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.181722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.181752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.182101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.182131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.182503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.182534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.182879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.182909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.183259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.183289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 
00:29:57.735 [2024-11-20 11:30:50.183638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.183668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.184017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.184046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.184274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.184305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.184655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.184685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.185031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.185061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.185420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.185453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.185808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.185838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.186195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.186228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.186462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.186492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.186845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.186876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 
00:29:57.735 [2024-11-20 11:30:50.187240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.187271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.187620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.187651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.187996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.188025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.188353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.188385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.188734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.188763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.189119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.189149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.189467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.189498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.189850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.189880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.190228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.190259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 00:29:57.735 [2024-11-20 11:30:50.190641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.735 [2024-11-20 11:30:50.190672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.735 qpair failed and we were unable to recover it. 
00:29:57.735 [2024-11-20 11:30:50.191026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.735 [2024-11-20 11:30:50.191055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:57.735 qpair failed and we were unable to recover it.
00:29:57.735 [... the same three-line failure — connect() refused with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for roughly 200 further consecutive reconnect attempts between 11:30:50.191026 and 11:30:50.263300 ...]
00:29:57.741 [2024-11-20 11:30:50.263269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.741 [2024-11-20 11:30:50.263300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:57.741 qpair failed and we were unable to recover it.
00:29:57.741 [2024-11-20 11:30:50.263669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.263698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.264035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.264067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.264432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.264462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.264812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.264842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.265200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.265232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.265465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.265495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.265690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.265718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.266070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.266107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.266499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.266531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.266769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.266797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 
00:29:57.741 [2024-11-20 11:30:50.267142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.267183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.267418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.267447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.267804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.267833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.268182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.268213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.268429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.268457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.268809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.741 [2024-11-20 11:30:50.268838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.741 qpair failed and we were unable to recover it. 00:29:57.741 [2024-11-20 11:30:50.269180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.269211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.269547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.269576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.269935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.269966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.270217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.270250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 
00:29:57.742 [2024-11-20 11:30:50.270603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.270632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.270973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.271004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.271364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.271394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.271749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.271778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.272126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.272156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.272380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.272409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.272757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.272786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.273120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.273150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.273505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.273536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.273883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.273914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 
00:29:57.742 [2024-11-20 11:30:50.274253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.274285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.274647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.274677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.275023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.275054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.275400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.275432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.275765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.275795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.276152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.276207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.276399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.276428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.276776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.276806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.277170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.277200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.277545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.277574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 
00:29:57.742 [2024-11-20 11:30:50.277924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.277955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.278296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.278327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.278535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.278564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.278901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.278931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.279145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.279185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.279538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.279568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.279806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.279835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.280178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.280209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.280506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.280542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.280872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.280901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 
00:29:57.742 [2024-11-20 11:30:50.281109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.281138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.281357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.281387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.281743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.281772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.282118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.282149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.742 qpair failed and we were unable to recover it. 00:29:57.742 [2024-11-20 11:30:50.282394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.742 [2024-11-20 11:30:50.282426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.282667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.282696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.283048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.283078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.283415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.283448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.283731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.283761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.284025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.284053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 
00:29:57.743 [2024-11-20 11:30:50.284400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.284430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.284790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.284820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.285165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.285196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.285538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.285568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.285786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.285815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.286118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.286148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.286508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.286539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.286633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.286660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.287020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.287049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.287412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.287442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 
00:29:57.743 [2024-11-20 11:30:50.287789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.287819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.288026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.288055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.288265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.288294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.288643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.288672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.289036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.289066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.289388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.289425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.289767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.289796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.290171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.290201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.290555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.290584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.290954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.290982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 
00:29:57.743 [2024-11-20 11:30:50.291329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.291361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.291482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.291510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.291719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.291747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.292112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.292140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.292525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.292556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.292885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.292915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.293266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.293295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.293522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.743 [2024-11-20 11:30:50.293551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.743 qpair failed and we were unable to recover it. 00:29:57.743 [2024-11-20 11:30:50.293896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.293924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.294272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.294303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 
00:29:57.744 [2024-11-20 11:30:50.294706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.294735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.294968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.294996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.295333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.295363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.295703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.295732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.296101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.296132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.296522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.296552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.296910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.296940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.297306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.297337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.297699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.297729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.298077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.298106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 
00:29:57.744 [2024-11-20 11:30:50.298468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.298500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.298897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.298925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.299179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.299213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.299468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.299497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.299846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.299875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.300114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.300143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.300494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.300524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.300886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.300916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.301138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.301178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.301533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.301563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 
00:29:57.744 [2024-11-20 11:30:50.301770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.301800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.302042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.302071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.302442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.302472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.302840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.302869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.303227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.303256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.303656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.303686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.304032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.304068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.304396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.304423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.304786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.304813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.305184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.305212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 
00:29:57.744 [2024-11-20 11:30:50.305514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.305540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.305902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.305928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.306286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.306315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.306729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.306755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.307111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.307139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.307522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.307551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.744 [2024-11-20 11:30:50.307763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.744 [2024-11-20 11:30:50.307794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.744 qpair failed and we were unable to recover it. 00:29:57.745 [2024-11-20 11:30:50.308148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.745 [2024-11-20 11:30:50.308185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.745 qpair failed and we were unable to recover it. 00:29:57.745 [2024-11-20 11:30:50.308542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.745 [2024-11-20 11:30:50.308571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.745 qpair failed and we were unable to recover it. 00:29:57.745 [2024-11-20 11:30:50.308904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.745 [2024-11-20 11:30:50.308932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.745 qpair failed and we were unable to recover it. 
00:29:57.745 [2024-11-20 11:30:50.309082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.745 [2024-11-20 11:30:50.309111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.745 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously for every retry from 11:30:50.309 through 11:30:50.380; duplicate occurrences elided ...]
00:29:57.750 [2024-11-20 11:30:50.379869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.379899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it.
00:29:57.750 [2024-11-20 11:30:50.380262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.380294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.380649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.380679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.381009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.381039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.381414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.381444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.381675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.381704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.382056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.382085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.382433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.382463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.382806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.382836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.383193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.383223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 00:29:57.750 [2024-11-20 11:30:50.383566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.750 [2024-11-20 11:30:50.383596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.750 qpair failed and we were unable to recover it. 
00:29:57.750 [2024-11-20 11:30:50.383927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.383956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.384305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.384335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.384678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.384708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.385058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.385087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.385432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.385461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.385825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.385854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.386185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.386216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.386564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.386592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.386786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.386814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.387176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.387207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-11-20 11:30:50.387408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.387436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.387783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.387822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.388146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.388186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.388531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.388559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.388912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.388942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.389293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.389323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.389652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.389682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.389896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.389927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.390252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.390282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.390492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.390520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-11-20 11:30:50.390881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.390910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.391275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.391306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.391665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.391695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.392032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.392061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.392424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.392454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.392675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.392703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.393048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.393076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.393395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.393427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.393772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.393801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.394004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.394032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 
00:29:57.751 [2024-11-20 11:30:50.394408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.394438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.394785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.394815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.395164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.395194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.395541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.395573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.395903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.395933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.396287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.396317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.396663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.396692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.397037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.397066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.751 qpair failed and we were unable to recover it. 00:29:57.751 [2024-11-20 11:30:50.397281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.751 [2024-11-20 11:30:50.397317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.397616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.397645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-11-20 11:30:50.398001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.398030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.398394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.398424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.398759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.398788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.399131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.399168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.399314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.399343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.399687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.399716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.399918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.399946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.400301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.400332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.400674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.400702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.401048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.401076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-11-20 11:30:50.401437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.401468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.401829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.401858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.402193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.402224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.402433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.402461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.402704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.402733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.402960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.402990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.403384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.403415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.403765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.403794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.404166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.404197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.404539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.404568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-11-20 11:30:50.404899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.404928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.405279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.405310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.405725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.405754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.406094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.406123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.406503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.406533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.406891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.406920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.407320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.407351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.407711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.407739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.408093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.408122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.408495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.408525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 
00:29:57.752 [2024-11-20 11:30:50.408870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.408900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.752 [2024-11-20 11:30:50.409244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.752 [2024-11-20 11:30:50.409275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.752 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.409638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.409667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.409885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.409912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.410241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.410270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.410613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.410642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.410998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.411027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.411379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.411409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.411502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.411529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.411870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.411905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-11-20 11:30:50.412106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.412133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.412515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.412544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.412894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.412923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.413127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.413155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.413500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.413528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.413879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.413910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.414104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.414134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.414423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.414452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.414832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.414861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.415194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.415226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-11-20 11:30:50.415572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.415602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.415941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.415970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.416193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.416221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.416586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.416616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.416948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.416977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.417185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.417213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.417557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.417585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.417940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.417969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.418202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.418231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.418471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.418499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-11-20 11:30:50.418831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.418860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.419076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.419108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.419526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.419558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.419889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.419918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.420146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.420184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.420507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.420536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.420887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.420917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.421247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.421278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.421612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.421643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 00:29:57.753 [2024-11-20 11:30:50.421978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.753 [2024-11-20 11:30:50.422008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.753 qpair failed and we were unable to recover it. 
00:29:57.753 [2024-11-20 11:30:50.422328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.422357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.422701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.422730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.423060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.423090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.423312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.423341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.423667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.423695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.424055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.424084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.424466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.424496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.424744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.424772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.425118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.425146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 00:29:57.754 [2024-11-20 11:30:50.425498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.425528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it. 
00:29:57.754 [2024-11-20 11:30:50.425733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.754 [2024-11-20 11:30:50.425762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:57.754 qpair failed and we were unable to recover it.
00:29:58.039 (last three messages repeated for every reconnect attempt from [2024-11-20 11:30:50.425733] through [2024-11-20 11:30:50.497929]; each attempt to connect tqpair=0xd170c0 to 10.0.0.2, port=4420 failed with errno = 111 and the qpair could not be recovered)
00:29:58.039 [2024-11-20 11:30:50.498293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.498324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.498558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.498586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.498892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.498921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.499271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.499303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.499498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.499526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.499805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.499833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.500176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.500206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.500543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.500572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.500914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.500944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.501276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.501307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 
00:29:58.039 [2024-11-20 11:30:50.501511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.501539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.501762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.501793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.502007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.502039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.502390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.502422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.502750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.502780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.503108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.503136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.503506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.503536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.503880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.503909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.503997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.504024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.504522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.504620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 
00:29:58.039 [2024-11-20 11:30:50.504982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.505019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.505428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.505529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.505903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.505941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.039 [2024-11-20 11:30:50.506359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.039 [2024-11-20 11:30:50.506450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.039 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.506706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.506743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.507073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.507104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.507333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.507368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.507743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.507773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.508103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.508133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.508500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.508532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 
00:29:58.040 [2024-11-20 11:30:50.508855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.508885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.509228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.509260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.509466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.509495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.509840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.509870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.510204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.510235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.510620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.510651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.510891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.510920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.511259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.511291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.511604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.511634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.511869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.511898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 
00:29:58.040 [2024-11-20 11:30:50.512239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.512271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.512571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.512603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.512931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.512961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.513148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.513191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.513544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.513573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.513904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.513935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.514292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.514324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.514646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.514677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.514885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.514915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.515242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.515272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 
00:29:58.040 [2024-11-20 11:30:50.515473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.515501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.515711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.040 [2024-11-20 11:30:50.515741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.040 qpair failed and we were unable to recover it. 00:29:58.040 [2024-11-20 11:30:50.516073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.516103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.516454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.516484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.516837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.516866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.517059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.517088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.517437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.517467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.517812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.517842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.518099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.518133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.518482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.518513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 
00:29:58.041 [2024-11-20 11:30:50.518861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.518891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.519110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.519174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.519552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.519582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.519938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.519966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.520282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.520314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.520646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.520676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.521024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.521053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.521323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.521353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.521597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.521630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.521955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.521984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 
00:29:58.041 [2024-11-20 11:30:50.522324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.522354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.522718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.522748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.523081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.523110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.523484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.523515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.523861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.523890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.524240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.524272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.524641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.524671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.525024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.525053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.525397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.525427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 00:29:58.041 [2024-11-20 11:30:50.525785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.041 [2024-11-20 11:30:50.525814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.041 qpair failed and we were unable to recover it. 
00:29:58.042 [2024-11-20 11:30:50.526146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.526183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.526531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.526560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.526888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.526919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.527251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.527282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.527630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.527659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.528008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.528038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.528394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.528426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.528771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.528800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.529135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.529175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.529533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.529565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 
00:29:58.042 [2024-11-20 11:30:50.529764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.529793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.530139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.530176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.530543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.530572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.530922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.530951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.531177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.531207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.531471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.531500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.531824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.531853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.532053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.532083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.532442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.532473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.532836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.532865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 
00:29:58.042 [2024-11-20 11:30:50.533091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.533120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.533488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.533524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.533862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.533892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.534236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.534266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.534498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.534526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.534874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.534905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.535134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.535169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.535538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.535566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.535919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.535948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 00:29:58.042 [2024-11-20 11:30:50.536304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.042 [2024-11-20 11:30:50.536335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.042 qpair failed and we were unable to recover it. 
00:29:58.042 [2024-11-20 11:30:50.536679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.536709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.536982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.537011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.537334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.537364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.537562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.537589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.537941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.537970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.538243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.538272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.538580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.538608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.538993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.539022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.539377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.539407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.539757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.539785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 
00:29:58.043 [2024-11-20 11:30:50.540149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.540190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.540415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.540443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.540788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.540816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.541157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.541199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.541536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.541565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.541814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.542157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.542198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.542435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.542462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.542699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.542727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 00:29:58.043 [2024-11-20 11:30:50.543061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.043 [2024-11-20 11:30:50.543091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.043 qpair failed and we were unable to recover it. 
00:29:58.043 [2024-11-20 11:30:50.543304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.043 [2024-11-20 11:30:50.543335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.043 qpair failed and we were unable to recover it.
00:29:58.043 [... the same connect() failed, errno = 111 (ECONNREFUSED) / sock connection error / qpair failed sequence for tqpair=0x7fa99c000b90 against 10.0.0.2:4420 repeats ~85 more times, 11:30:50.543 through 11:30:50.571 ...]
00:29:58.046 [2024-11-20 11:30:50.571527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.046 [2024-11-20 11:30:50.571618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd170c0 with addr=10.0.0.2, port=4420
00:29:58.046 qpair failed and we were unable to recover it.
00:29:58.046 [... the same sequence for tqpair=0xd170c0 against 10.0.0.2:4420 repeats ~122 more times, 11:30:50.571 through 11:30:50.612 ...]
00:29:58.050 [2024-11-20 11:30:50.612656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.050 [2024-11-20 11:30:50.612763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420
00:29:58.050 qpair failed and we were unable to recover it.
00:29:58.055 [2024-11-20 11:30:50.669148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.055 [2024-11-20 11:30:50.669184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420
00:29:58.055 qpair failed and we were unable to recover it.
00:29:58.055 [2024-11-20 11:30:50.669535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.669564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.669919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.669947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.670124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.670152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.670502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.670532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.670973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.671002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.671192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.671221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.671578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.671606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.671943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.671978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.672209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.672240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 00:29:58.055 [2024-11-20 11:30:50.672553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.055 [2024-11-20 11:30:50.672581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.055 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-11-20 11:30:50.672919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.672948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.673331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.673361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.673563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.673590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.673801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.673830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.674157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.674198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.674539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.674567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.674942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.674971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.675304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.675335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.675677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.675706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.676060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.676089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-11-20 11:30:50.676315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.676345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.676554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.676583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.677009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.677038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.677384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.677414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.677758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.677787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.677995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.678022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.678367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.678397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.678746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.678775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.678988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.679015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.679353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.679382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 
00:29:58.056 [2024-11-20 11:30:50.679732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.679762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.679852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.679879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.056 qpair failed and we were unable to recover it. 00:29:58.056 [2024-11-20 11:30:50.680123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.056 [2024-11-20 11:30:50.680152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.680383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.680412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.680676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.680705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.681040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.681069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.681406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.681436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.681650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.681678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.681913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.681942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.682268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.682298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
00:29:58.057 [2024-11-20 11:30:50.682633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.682661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.682874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.682903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.683127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.683263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.683293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.683638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.683667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.683864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.683892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.684271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.684301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.684551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.684585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.684814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.684843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.685180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.685211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
00:29:58.057 [2024-11-20 11:30:50.685563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.685592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.685903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.685932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.686276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.686306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.686553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.686581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.686806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.686835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.687140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.687176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.687525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.687554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.687905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.687934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.688287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.688317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 00:29:58.057 [2024-11-20 11:30:50.688665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.057 [2024-11-20 11:30:50.688694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa9a4000b90 with addr=10.0.0.2, port=4420 00:29:58.057 qpair failed and we were unable to recover it. 
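errno = 111 here is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 at this point, so the initiator's posix_sock_create() fails immediately and nvme_tcp_qpair_connect_sock() keeps retrying on the same qpair, which is why the log emits many of these triplets within milliseconds of each other. A minimal bash sketch of the same probe-and-retry behaviour (host, port, retry count, and delay are illustrative, not values taken from the test):

    # Probe an NVMe/TCP listener the way the initiator does; a refused
    # connection here is the same errno = 111 seen in the log above.
    host=10.0.0.2 port=4420
    for attempt in 1 2 3 4 5; do
        if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "attempt $attempt: connected"
            break
        fi
        echo "attempt $attempt: connect() failed (connection refused), retrying"
        sleep 0.2
    done

Each refused probe exits nonzero without hanging, so the loop spins quickly until a listener appears or the budget runs out.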
[... connect() retries for tqpair=0x7fa9a4000b90 continue, interleaved with the test's own xtrace output ...]
00:29:58.058 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:58.058 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:58.058 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:58.058 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:58.058 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed, errno = 111 retries for tqpair=0x7fa9a4000b90 continue through 11:30:50.698331 ...]
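The interleaved "-- #" lines are the test's own bash xtrace, apparently the tail of a retry loop in autotest_common.sh ((( i == 0 )) looks like the exhausted-budget check and return 0 the success path) followed by nvmf/common.sh closing the start_nvmf_tgt timing block. A hedged sketch of that waitfor-style pattern, with illustrative names rather than the exact autotest_common.sh source:

    # Poll a readiness check until it passes or the retry budget runs out;
    # the (( i == 0 )) / return 0 pair in the xtrace above is the end of
    # a loop shaped roughly like this one.
    waitfor() {
        local i
        for ((i = 50; i > 0; i--)); do
            if "$@"; then            # e.g. waitfor kill -0 "$tgt_pid"
                return 0             # ready: the success path seen in the trace
            fi
            sleep 0.1
        done
        (( i == 0 )) && return 1     # budget exhausted: report failure
    }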
00:29:58.058 [2024-11-20 11:30:50.698512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.058 [2024-11-20 11:30:50.698596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.058 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7fa99c000b90 repeats for every retry through 11:30:50.725265 ...]
00:29:58.061 [2024-11-20 11:30:50.725612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.725641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.725986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.726015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.726235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.726266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.726631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.726660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.726988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.727017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.727218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.727248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.727466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.727495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.727603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.727633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.061 [2024-11-20 11:30:50.727961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.061 [2024-11-20 11:30:50.727993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.061 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.728332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.728362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 
00:29:58.062 [2024-11-20 11:30:50.728690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.728718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.729062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.729091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.729448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.729480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.729837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.729866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.730215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.730246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.730476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.730505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.730847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.730876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 [2024-11-20 11:30:50.731217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.731249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 00:29:58.062 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.062 [2024-11-20 11:30:50.731588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.062 [2024-11-20 11:30:50.731618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.062 qpair failed and we were unable to recover it. 
00:29:58.062 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:58.062 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.062 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.062 [... connect() retries against 10.0.0.2:4420 keep failing with errno = 111 in the background ...]
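errno = 111 is ECONNREFUSED on Linux: the host keeps retrying 10.0.0.2:4420 and is refused because nothing is listening there yet while the target is still being configured. For reference, a minimal sketch of the bdev RPC that the harness's rpc_cmd wrapper issues above, assuming a running SPDK target and the stock scripts/rpc.py client (the path is illustrative):

    # create a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0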
00:29:58.062 [... connect() failed (errno = 111) / qpair failed retries continue, timestamps 11:30:50.734 through 11:30:50.765 ...]
00:29:58.330 Malloc0
00:29:58.330 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.330 [... retries continue ...]
00:29:58.330 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:58.330 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.330 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.331 [... retries continue ...]
00:29:58.331 [... retries continue, timestamps 11:30:50.768 through 11:30:50.772 ...]
00:29:58.331 [2024-11-20 11:30:50.772242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:58.331 [... retries continue ...]
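The *** TCP Transport Init *** notice above is the target-side effect of the nvmf_create_transport RPC. A minimal sketch of the remaining target setup that would let the host's retries succeed, under the same assumptions as the sketch above; the subsystem NQN and the listener step are illustrative and not taken from this log:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    # hypothetical follow-up: expose Malloc0 and listen where the host is retrying
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420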
00:29:58.331 [2024-11-20 11:30:50.774828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.774857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.775210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.775240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.775593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.775623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.775960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.775989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.776329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.776358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.776692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.776730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.777094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.777123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.777426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.777457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.777679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.777708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.778065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.778097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 
00:29:58.331 [2024-11-20 11:30:50.778293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.778324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.778676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.778706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.779045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.779074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.779453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.779484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.331 [2024-11-20 11:30:50.779848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.331 [2024-11-20 11:30:50.779878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.331 qpair failed and we were unable to recover it. 00:29:58.332 [2024-11-20 11:30:50.780209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.332 [2024-11-20 11:30:50.780238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.332 qpair failed and we were unable to recover it. 00:29:58.332 [2024-11-20 11:30:50.780579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.332 [2024-11-20 11:30:50.780608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.332 qpair failed and we were unable to recover it. 00:29:58.332 [2024-11-20 11:30:50.780872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.332 [2024-11-20 11:30:50.780900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.332 qpair failed and we were unable to recover it. 00:29:58.332 [2024-11-20 11:30:50.781135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.332 [2024-11-20 11:30:50.781203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420 00:29:58.332 qpair failed and we were unable to recover it. 
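On Linux, errno 111 in the repeated triplet above is ECONNREFUSED: the host's connect() reaches 10.0.0.2, but nothing is listening on TCP port 4420 yet, so the initiator keeps retrying until the target's listener comes up further down in the log. A minimal sketch of the same probe outside SPDK, using only bash's /dev/tcp redirection (address and port taken from the log; the 1-second timeout is an arbitrary choice):

  # Probe the target address/port the way the failing connect() does.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener is up"
  else
      echo "connection refused or timed out (ECONNREFUSED is errno 111 on Linux)"
  fi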
00:29:58.332 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.332 [2024-11-20 11:30:50.781581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.332 [2024-11-20 11:30:50.781610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.332 qpair failed and we were unable to recover it.
00:29:58.332 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:58.332 [2024-11-20 11:30:50.781956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.332 [2024-11-20 11:30:50.781986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.332 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.332 qpair failed and we were unable to recover it.
00:29:58.332 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the retry triplet repeats from 11:30:50.782328 through 11:30:50.784082 ...]
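The @22 xtrace line above is the first target-side RPC of the test. Assuming the rpc_cmd wrapper resolves to SPDK's stock scripts/rpc.py against the default local RPC socket (the usual setup in these autotest runs, though the wrapper itself is not shown here), the manual equivalent would be:

  # Create the subsystem the initiator is trying to reach;
  # -a allows any host NQN to connect, -s sets the serial number.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001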
[... the retry triplet repeats from 11:30:50.784336 through 11:30:50.793215 ...]
00:29:58.333 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.333 [2024-11-20 11:30:50.793614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.333 [2024-11-20 11:30:50.793645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.333 qpair failed and we were unable to recover it.
00:29:58.333 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:58.333 [2024-11-20 11:30:50.793965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.333 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.333 [2024-11-20 11:30:50.793995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.333 qpair failed and we were unable to recover it.
00:29:58.333 [2024-11-20 11:30:50.794206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.333 [2024-11-20 11:30:50.794237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.333 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.333 qpair failed and we were unable to recover it.
[... the retry triplet repeats from 11:30:50.794527 through 11:30:50.796339 ...]
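The @24 trace attaches a namespace to that subsystem. Malloc0 is a RAM-backed bdev that an earlier, untraced part of the run must have created; the bdev_malloc_create line below is a hypothetical reconstruction of that step (size and block size are illustrative, not from the log), while the add_ns line mirrors the traced command:

  # Hypothetical: create the RAM bdev the test refers to as Malloc0 (64 MiB, 512-byte blocks).
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  # Mirrors the traced RPC: expose Malloc0 as a namespace of cnode1.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0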
00:29:58.333 [2024-11-20 11:30:50.796685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.333 [2024-11-20 11:30:50.796715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.333 qpair failed and we were unable to recover it.
[... the retry triplet repeats from 11:30:50.797062 through 11:30:50.803764 ...]
[... the retry triplet repeats from 11:30:50.804060 through 11:30:50.805155 ...]
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.334 [2024-11-20 11:30:50.805489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.334 [2024-11-20 11:30:50.805518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.334 qpair failed and we were unable to recover it.
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:58.334 [2024-11-20 11:30:50.805846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.334 [2024-11-20 11:30:50.805875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.334 qpair failed and we were unable to recover it.
00:29:58.334 [2024-11-20 11:30:50.806113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.334 [2024-11-20 11:30:50.806141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.334 qpair failed and we were unable to recover it.
00:29:58.334 [2024-11-20 11:30:50.806391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.334 [2024-11-20 11:30:50.806420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.334 qpair failed and we were unable to recover it.
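The @25 trace is the step that finally ends the ECONNREFUSED loop: it asks the target to open the TCP listener the initiator has been probing. Under the same rpc.py assumption as above, the manual equivalent is:

  # Start listening for NVMe/TCP on the address/port the host keeps retrying.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Note that the connect()/qpair triplet keeps firing between this RPC being issued and the *** Listening *** notice below; the retry loop only stops once the socket is actually bound.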
[... the retry triplet repeats from 11:30:50.806757 through 11:30:50.811976 ...]
00:29:58.334 [2024-11-20 11:30:50.812216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.334 [2024-11-20 11:30:50.812247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa99c000b90 with addr=10.0.0.2, port=4420
00:29:58.334 qpair failed and we were unable to recover it.
00:29:58.334 [2024-11-20 11:30:50.812513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:58.334 [2024-11-20 11:30:50.823238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.334 [2024-11-20 11:30:50.823362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.334 [2024-11-20 11:30:50.823409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.334 [2024-11-20 11:30:50.823440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.334 [2024-11-20 11:30:50.823461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:29:58.334 [2024-11-20 11:30:50.823513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.334 qpair failed and we were unable to recover it.
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:58.334 11:30:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2925519
00:29:58.334 [2024-11-20 11:30:50.833022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.334 [2024-11-20 11:30:50.833128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.334 [2024-11-20 11:30:50.833155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.334 [2024-11-20 11:30:50.833179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.334 [2024-11-20 11:30:50.833193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:29:58.334 [2024-11-20 11:30:50.833223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.334 qpair failed and we were unable to recover it.
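Once the listener is up the failure mode changes: the TCP connect now succeeds, but the fabrics-level CONNECT for the I/O queue pair is rejected because the target no longer recognizes controller ID 0x1, presumably torn down by the disconnect scenario this tc2 case exercises. The status pair can be decoded by hand; reading sc 0x82 as CONNECT Invalid Parameters follows the NVMe-oF fabrics status-code table and is an interpretation, not something the log prints:

  # sct 1 = command-specific status type; convert sc 130 to hex:
  printf 'sc 130 = 0x%x\n' 130    # prints 0x82 -> CONNECT Invalid Parameters in the fabrics table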
00:29:58.335 [2024-11-20 11:30:50.843102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.335 [2024-11-20 11:30:50.843165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.335 [2024-11-20 11:30:50.843184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.335 [2024-11-20 11:30:50.843194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.335 [2024-11-20 11:30:50.843203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:29:58.335 [2024-11-20 11:30:50.843223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.335 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure block repeats, with fresh timestamps roughly every 10 ms, from 11:30:50.853 through 11:30:51.043757 ...]
00:29:58.336 [2024-11-20 11:30:51.053652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.336 [2024-11-20 11:30:51.053706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.336 [2024-11-20 11:30:51.053720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.336 [2024-11-20 11:30:51.053727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.336 [2024-11-20 11:30:51.053733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.336 [2024-11-20 11:30:51.053747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.336 qpair failed and we were unable to recover it. 00:29:58.336 [2024-11-20 11:30:51.063676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.336 [2024-11-20 11:30:51.063733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.336 [2024-11-20 11:30:51.063746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.336 [2024-11-20 11:30:51.063754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.336 [2024-11-20 11:30:51.063760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.336 [2024-11-20 11:30:51.063774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.336 qpair failed and we were unable to recover it. 00:29:58.599 [2024-11-20 11:30:51.073702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.073756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.073769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.073776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.073783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.073797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 
00:29:58.599 [2024-11-20 11:30:51.083647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.083689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.083702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.083709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.083716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.083730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 00:29:58.599 [2024-11-20 11:30:51.093710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.093768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.093781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.093788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.093794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.093809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 00:29:58.599 [2024-11-20 11:30:51.103752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.103804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.103817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.103824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.103831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.103845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 
00:29:58.599 [2024-11-20 11:30:51.113775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.113827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.113840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.113847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.113854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.113868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 00:29:58.599 [2024-11-20 11:30:51.123737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.123783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.123800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.123807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.123813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.123827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 00:29:58.599 [2024-11-20 11:30:51.133815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.133872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.133885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.133892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.599 [2024-11-20 11:30:51.133899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.599 [2024-11-20 11:30:51.133912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.599 qpair failed and we were unable to recover it. 
00:29:58.599 [2024-11-20 11:30:51.143857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.599 [2024-11-20 11:30:51.143918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.599 [2024-11-20 11:30:51.143932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.599 [2024-11-20 11:30:51.143939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.143946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.143959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.153879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.153940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.153964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.153973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.153980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.154000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.163860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.163916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.163939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.163948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.163965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.163986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 
00:29:58.600 [2024-11-20 11:30:51.173942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.174001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.174025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.174034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.174041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.174061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.183980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.184037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.184052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.184059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.184066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.184082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.194002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.194105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.194119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.194126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.194133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.194149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 
00:29:58.600 [2024-11-20 11:30:51.203963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.204010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.204023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.204031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.204037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.204052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.213983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.214036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.214049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.214057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.214064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.214078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.224079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.224136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.224149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.224157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.224168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.224182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 
00:29:58.600 [2024-11-20 11:30:51.234103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.234155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.234171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.234178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.234185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.234199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.244096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.244149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.244165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.244173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.244179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.244193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 00:29:58.600 [2024-11-20 11:30:51.254157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.254214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.254231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.600 [2024-11-20 11:30:51.254238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.600 [2024-11-20 11:30:51.254245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.600 [2024-11-20 11:30:51.254259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.600 qpair failed and we were unable to recover it. 
00:29:58.600 [2024-11-20 11:30:51.264186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.600 [2024-11-20 11:30:51.264241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.600 [2024-11-20 11:30:51.264254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.264261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.264267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.264282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 00:29:58.601 [2024-11-20 11:30:51.274197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.274250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.274262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.274270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.274276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.274291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 00:29:58.601 [2024-11-20 11:30:51.284204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.284258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.284271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.284278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.284285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.284299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 
00:29:58.601 [2024-11-20 11:30:51.294297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.294386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.294399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.294406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.294416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.294431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 00:29:58.601 [2024-11-20 11:30:51.304316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.304372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.304385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.304392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.304399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.304413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 00:29:58.601 [2024-11-20 11:30:51.314333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.314379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.314392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.314399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.314406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.314420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 
00:29:58.601 [2024-11-20 11:30:51.324317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.324369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.324383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.324390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.324397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.324415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 00:29:58.601 [2024-11-20 11:30:51.334419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.601 [2024-11-20 11:30:51.334476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.601 [2024-11-20 11:30:51.334490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.601 [2024-11-20 11:30:51.334497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.601 [2024-11-20 11:30:51.334504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.601 [2024-11-20 11:30:51.334518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.601 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.344429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.344481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.344494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.344502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.344509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.344523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-11-20 11:30:51.354359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.354414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.354428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.354435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.354442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.354457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.364453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.364502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.364515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.364523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.364529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.364544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.374535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.374591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.374604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.374612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.374618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.374633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-11-20 11:30:51.384555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.384640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.384653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.384661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.384668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.384682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.394554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.394603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.394616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.394623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.394629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.394643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.404579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.404629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.404642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.404649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.404656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.404670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 
00:29:58.866 [2024-11-20 11:30:51.414643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.414699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.414712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.414720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.414726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.414740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.424679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.424758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.866 [2024-11-20 11:30:51.424770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.866 [2024-11-20 11:30:51.424781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.866 [2024-11-20 11:30:51.424788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.866 [2024-11-20 11:30:51.424802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.866 qpair failed and we were unable to recover it. 00:29:58.866 [2024-11-20 11:30:51.434700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.866 [2024-11-20 11:30:51.434752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.434765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.434773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.434779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.434793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 
00:29:58.867 [2024-11-20 11:30:51.444688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.867 [2024-11-20 11:30:51.444740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.444753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.444760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.444767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.444781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 00:29:58.867 [2024-11-20 11:30:51.454745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.867 [2024-11-20 11:30:51.454801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.454814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.454821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.454828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.454842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 00:29:58.867 [2024-11-20 11:30:51.464676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.867 [2024-11-20 11:30:51.464738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.464751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.464758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.464765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.464782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 
00:29:58.867 [2024-11-20 11:30:51.474811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.867 [2024-11-20 11:30:51.474864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.474877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.474884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.474891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.474905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 00:29:58.867 [2024-11-20 11:30:51.484784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.867 [2024-11-20 11:30:51.484833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.484846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.484853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.484859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.484874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 00:29:58.867 [2024-11-20 11:30:51.494746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.867 [2024-11-20 11:30:51.494805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.867 [2024-11-20 11:30:51.494818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.867 [2024-11-20 11:30:51.494825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.867 [2024-11-20 11:30:51.494832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:58.867 [2024-11-20 11:30:51.494846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.867 qpair failed and we were unable to recover it. 
00:29:58.867 [2024-11-20 11:30:51.504900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.867 [2024-11-20 11:30:51.504984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.867 [2024-11-20 11:30:51.504997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.867 [2024-11-20 11:30:51.505004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.867 [2024-11-20 11:30:51.505011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:29:58.867 [2024-11-20 11:30:51.505024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.867 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence repeats 68 more times between 11:30:51.514 and 11:30:52.186, differing only in timestamps ...]
00:29:59.660 [2024-11-20 11:30:52.196824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.196876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.196889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.196897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.196903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.196918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 00:29:59.660 [2024-11-20 11:30:52.206828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.206872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.206885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.206892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.206899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.206913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 00:29:59.660 [2024-11-20 11:30:52.216884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.216945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.216958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.216965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.216972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.216986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 
00:29:59.660 [2024-11-20 11:30:52.226916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.226975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.226999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.227008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.227016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.227037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 00:29:59.660 [2024-11-20 11:30:52.236927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.236983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.236997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.237005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.237012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.237027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 00:29:59.660 [2024-11-20 11:30:52.246810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.246865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.246885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.246893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.246900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.246915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 
00:29:59.660 [2024-11-20 11:30:52.256973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.257052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.257066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.257074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.257081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.257096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 00:29:59.660 [2024-11-20 11:30:52.267030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.267085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.267098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.267105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.267112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.267126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 00:29:59.660 [2024-11-20 11:30:52.277042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.277094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.277108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.277115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.277122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.660 [2024-11-20 11:30:52.277136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.660 qpair failed and we were unable to recover it. 
00:29:59.660 [2024-11-20 11:30:52.287046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.660 [2024-11-20 11:30:52.287096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.660 [2024-11-20 11:30:52.287109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.660 [2024-11-20 11:30:52.287116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.660 [2024-11-20 11:30:52.287126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.287141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.297122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.297181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.297195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.297202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.297209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.297223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.307136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.307195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.307209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.307217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.307223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.307238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 
00:29:59.661 [2024-11-20 11:30:52.317154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.317212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.317226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.317233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.317239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.317254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.327153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.327206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.327220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.327227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.327234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.327248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.337238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.337303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.337316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.337324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.337330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.337345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 
00:29:59.661 [2024-11-20 11:30:52.347251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.347314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.347328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.347335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.347342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.347356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.357280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.357340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.357353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.357360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.357367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.357381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.367149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.367205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.367219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.367226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.367233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.367247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 
00:29:59.661 [2024-11-20 11:30:52.377283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.377342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.377358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.377366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.377372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.377387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.661 [2024-11-20 11:30:52.387338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.661 [2024-11-20 11:30:52.387392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.661 [2024-11-20 11:30:52.387406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.661 [2024-11-20 11:30:52.387413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.661 [2024-11-20 11:30:52.387420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.661 [2024-11-20 11:30:52.387434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.661 qpair failed and we were unable to recover it. 00:29:59.922 [2024-11-20 11:30:52.397392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.397442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.397455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.397463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.397469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.922 [2024-11-20 11:30:52.397484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.922 qpair failed and we were unable to recover it. 
00:29:59.922 [2024-11-20 11:30:52.407356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.407415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.407428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.407436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.407442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.922 [2024-11-20 11:30:52.407457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.922 qpair failed and we were unable to recover it. 00:29:59.922 [2024-11-20 11:30:52.417314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.417412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.417426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.417437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.417445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.922 [2024-11-20 11:30:52.417459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.922 qpair failed and we were unable to recover it. 00:29:59.922 [2024-11-20 11:30:52.427442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.427493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.427506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.427513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.427520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.922 [2024-11-20 11:30:52.427534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.922 qpair failed and we were unable to recover it. 
00:29:59.922 [2024-11-20 11:30:52.437510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.437560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.437573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.437581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.437588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.922 [2024-11-20 11:30:52.437602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.922 qpair failed and we were unable to recover it. 00:29:59.922 [2024-11-20 11:30:52.447486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.447538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.447551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.447558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.447565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.922 [2024-11-20 11:30:52.447579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.922 qpair failed and we were unable to recover it. 00:29:59.922 [2024-11-20 11:30:52.457546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.922 [2024-11-20 11:30:52.457602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.922 [2024-11-20 11:30:52.457615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.922 [2024-11-20 11:30:52.457624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.922 [2024-11-20 11:30:52.457631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.457645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.467605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.467659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.467673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.467680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.467687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.467701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.477599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.477651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.477664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.477671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.477678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.477692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.487566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.487612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.487626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.487632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.487639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.487653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.497684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.497739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.497752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.497759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.497766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.497781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.507689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.507750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.507763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.507770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.507777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.507791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.517657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.517713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.517726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.517733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.517740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.517754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.527655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.527706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.527720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.527727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.527734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.527748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.537755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.537812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.537826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.537833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.537840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.537854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.547790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.547888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.547901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.547913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.547920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.547935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.557793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.557842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.557856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.557863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.557870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.557884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.567790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.567839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.567852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.567859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.567866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.567880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.577870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.577936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.577949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.577957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.577964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.577978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.587774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.587840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.587854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.587861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.587867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.587885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.597958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.598029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.598042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.598049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.598056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.598070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.607940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.607987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.608000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.608007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.608014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.608028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.617966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.618019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.618032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.618039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.618046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.618060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.628013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.628070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.628083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.628090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.628097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.628111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.638033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.638095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.638109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.638116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.638123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.638137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 
00:29:59.923 [2024-11-20 11:30:52.647993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.648041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.648054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.648062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.648069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.648083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:29:59.923 [2024-11-20 11:30:52.658087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.923 [2024-11-20 11:30:52.658143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.923 [2024-11-20 11:30:52.658162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.923 [2024-11-20 11:30:52.658170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.923 [2024-11-20 11:30:52.658177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:29:59.923 [2024-11-20 11:30:52.658192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.923 qpair failed and we were unable to recover it. 00:30:00.185 [2024-11-20 11:30:52.668070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.185 [2024-11-20 11:30:52.668119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.185 [2024-11-20 11:30:52.668133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.185 [2024-11-20 11:30:52.668141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.185 [2024-11-20 11:30:52.668147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.185 [2024-11-20 11:30:52.668165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.185 qpair failed and we were unable to recover it. 
00:30:00.185 [2024-11-20 11:30:52.678124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.185 [2024-11-20 11:30:52.678178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.185 [2024-11-20 11:30:52.678195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.185 [2024-11-20 11:30:52.678203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.185 [2024-11-20 11:30:52.678210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.185 [2024-11-20 11:30:52.678224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.185 qpair failed and we were unable to recover it. 00:30:00.185 [2024-11-20 11:30:52.688134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.185 [2024-11-20 11:30:52.688184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.185 [2024-11-20 11:30:52.688198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.185 [2024-11-20 11:30:52.688205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.185 [2024-11-20 11:30:52.688212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.185 [2024-11-20 11:30:52.688227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.185 qpair failed and we were unable to recover it. 00:30:00.185 [2024-11-20 11:30:52.698188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.185 [2024-11-20 11:30:52.698245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.185 [2024-11-20 11:30:52.698259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.185 [2024-11-20 11:30:52.698266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.185 [2024-11-20 11:30:52.698272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.185 [2024-11-20 11:30:52.698286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.185 qpair failed and we were unable to recover it. 
00:30:00.185 [2024-11-20 11:30:52.708190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.185 [2024-11-20 11:30:52.708246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.708259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.708266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.708272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.708286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.718223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.718277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.718290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.718297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.718307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.718322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.728227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.728274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.728287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.728294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.728301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.728315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.738291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.738348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.738362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.738369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.738375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.738389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.748306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.748359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.748373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.748380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.748387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.748401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.758340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.758393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.758408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.758416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.758423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.758441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.768367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.768417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.768431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.768438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.768445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.768460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.778407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.778463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.778476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.778483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.778490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.778504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.788305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.788355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.788368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.788376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.788382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.788397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.798453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.798507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.798521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.798528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.798535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.798549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.808442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.808492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.808509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.186 [2024-11-20 11:30:52.808518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.186 [2024-11-20 11:30:52.808526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.186 [2024-11-20 11:30:52.808541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.186 qpair failed and we were unable to recover it.
00:30:00.186 [2024-11-20 11:30:52.818490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.186 [2024-11-20 11:30:52.818544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.186 [2024-11-20 11:30:52.818557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.818565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.818571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.818585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.828513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.828562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.828575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.828582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.828589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.828604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.838567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.838619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.838632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.838639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.838645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.838660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.848563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.848612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.848625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.848632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.848642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.848657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.858609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.858661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.858674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.858682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.858689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.858703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.868499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.868548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.868562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.868569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.868575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.868589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.878708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.878789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.878803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.878810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.878817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.878832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.888677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.888727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.888740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.888747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.888754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.888769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.898626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.898679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.898694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.898701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.898708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.898723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.908748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.908795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.908809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.908816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.908823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.908837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.187 [2024-11-20 11:30:52.918789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.187 [2024-11-20 11:30:52.918875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.187 [2024-11-20 11:30:52.918888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.187 [2024-11-20 11:30:52.918896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.187 [2024-11-20 11:30:52.918902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.187 [2024-11-20 11:30:52.918917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.187 qpair failed and we were unable to recover it.
00:30:00.448 [2024-11-20 11:30:52.928781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.448 [2024-11-20 11:30:52.928831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.928844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.928851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.928858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.928871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.938833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.938889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.938905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.938913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.938921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.938935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.948901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.948980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.948993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.949000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.949007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.949021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.958909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.958972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.958985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.958992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.958999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.959012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.968892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.968977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.968990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.968998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.969004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.969018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.978967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.979019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.979032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.979043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.979050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.979064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.988945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.989044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.989058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.989066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.989073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.989087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:52.998991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:52.999050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:52.999063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:52.999071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:52.999078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:52.999092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:53.008999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:53.009046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:53.009060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:53.009067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:53.009074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:53.009088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:53.019038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:53.019094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:53.019107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:53.019114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:53.019121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:53.019135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:53.029093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.449 [2024-11-20 11:30:53.029141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.449 [2024-11-20 11:30:53.029154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.449 [2024-11-20 11:30:53.029165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.449 [2024-11-20 11:30:53.029172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.449 [2024-11-20 11:30:53.029186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.449 qpair failed and we were unable to recover it.
00:30:00.449 [2024-11-20 11:30:53.039117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.039179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.039194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.039201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.039209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.039226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.049118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.049172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.049186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.049194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.049201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.049216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.059177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.059235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.059248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.059255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.059262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.059276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.069165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.069215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.069228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.069235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.069242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.069256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.079228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.079278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.079291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.079299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.079305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.079320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.089098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.089144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.089157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.089168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.089175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.089189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.099285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.099348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.099362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.099369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.099376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.099390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.109259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.109346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.109359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.109371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.109378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.109392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.119335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.119391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.119403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.119412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.119418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.119432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.129297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.129344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.129357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.129364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.129370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.129384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.450 [2024-11-20 11:30:53.139397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.450 [2024-11-20 11:30:53.139468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.450 [2024-11-20 11:30:53.139482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.450 [2024-11-20 11:30:53.139489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.450 [2024-11-20 11:30:53.139495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.450 [2024-11-20 11:30:53.139510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.450 qpair failed and we were unable to recover it.
00:30:00.451 [2024-11-20 11:30:53.149363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.451 [2024-11-20 11:30:53.149414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.451 [2024-11-20 11:30:53.149427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.451 [2024-11-20 11:30:53.149435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.451 [2024-11-20 11:30:53.149441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.451 [2024-11-20 11:30:53.149459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.451 qpair failed and we were unable to recover it.
00:30:00.451 [2024-11-20 11:30:53.159440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.451 [2024-11-20 11:30:53.159535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.451 [2024-11-20 11:30:53.159549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.451 [2024-11-20 11:30:53.159556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.451 [2024-11-20 11:30:53.159562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.451 [2024-11-20 11:30:53.159577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.451 qpair failed and we were unable to recover it.
00:30:00.451 [2024-11-20 11:30:53.169451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.451 [2024-11-20 11:30:53.169499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.451 [2024-11-20 11:30:53.169513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.451 [2024-11-20 11:30:53.169520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.451 [2024-11-20 11:30:53.169526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.451 [2024-11-20 11:30:53.169540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.451 qpair failed and we were unable to recover it.
00:30:00.451 [2024-11-20 11:30:53.179517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.451 [2024-11-20 11:30:53.179570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.451 [2024-11-20 11:30:53.179583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.451 [2024-11-20 11:30:53.179591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.451 [2024-11-20 11:30:53.179597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.451 [2024-11-20 11:30:53.179611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.451 qpair failed and we were unable to recover it.
00:30:00.712 [2024-11-20 11:30:53.189497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.712 [2024-11-20 11:30:53.189547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.712 [2024-11-20 11:30:53.189559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.712 [2024-11-20 11:30:53.189567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.712 [2024-11-20 11:30:53.189573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.712 [2024-11-20 11:30:53.189587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.712 qpair failed and we were unable to recover it.
00:30:00.712 [2024-11-20 11:30:53.199556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.712 [2024-11-20 11:30:53.199609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.712 [2024-11-20 11:30:53.199622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.712 [2024-11-20 11:30:53.199629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.712 [2024-11-20 11:30:53.199636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.712 [2024-11-20 11:30:53.199650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.712 qpair failed and we were unable to recover it.
00:30:00.712 [2024-11-20 11:30:53.209530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.712 [2024-11-20 11:30:53.209607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.712 [2024-11-20 11:30:53.209621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.712 [2024-11-20 11:30:53.209628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.712 [2024-11-20 11:30:53.209634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.712 [2024-11-20 11:30:53.209649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.712 qpair failed and we were unable to recover it.
00:30:00.712 [2024-11-20 11:30:53.219607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.712 [2024-11-20 11:30:53.219709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.712 [2024-11-20 11:30:53.219723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.712 [2024-11-20 11:30:53.219730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.712 [2024-11-20 11:30:53.219737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.712 [2024-11-20 11:30:53.219751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.712 qpair failed and we were unable to recover it.
00:30:00.712 [2024-11-20 11:30:53.229603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.713 [2024-11-20 11:30:53.229653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.713 [2024-11-20 11:30:53.229666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.713 [2024-11-20 11:30:53.229673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.713 [2024-11-20 11:30:53.229680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.713 [2024-11-20 11:30:53.229694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.713 qpair failed and we were unable to recover it.
00:30:00.713 [2024-11-20 11:30:53.239637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.713 [2024-11-20 11:30:53.239695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.713 [2024-11-20 11:30:53.239711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.713 [2024-11-20 11:30:53.239719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.713 [2024-11-20 11:30:53.239725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.713 [2024-11-20 11:30:53.239739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.713 qpair failed and we were unable to recover it.
00:30:00.713 [2024-11-20 11:30:53.249660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.713 [2024-11-20 11:30:53.249702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.713 [2024-11-20 11:30:53.249715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.713 [2024-11-20 11:30:53.249722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.713 [2024-11-20 11:30:53.249728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.713 [2024-11-20 11:30:53.249742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.713 qpair failed and we were unable to recover it.
00:30:00.713 [2024-11-20 11:30:53.259698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.713 [2024-11-20 11:30:53.259764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.713 [2024-11-20 11:30:53.259777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.713 [2024-11-20 11:30:53.259785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.713 [2024-11-20 11:30:53.259791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.713 [2024-11-20 11:30:53.259805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.713 qpair failed and we were unable to recover it.
00:30:00.713 [2024-11-20 11:30:53.269715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.713 [2024-11-20 11:30:53.269766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.713 [2024-11-20 11:30:53.269779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.713 [2024-11-20 11:30:53.269787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.713 [2024-11-20 11:30:53.269794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:00.713 [2024-11-20 11:30:53.269808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.713 qpair failed and we were unable to recover it.
00:30:00.713 [2024-11-20 11:30:53.279649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.279705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.279719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.279726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.279735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.279750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 00:30:00.713 [2024-11-20 11:30:53.289759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.289815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.289828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.289835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.289842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.289856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 00:30:00.713 [2024-11-20 11:30:53.299830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.299889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.299902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.299909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.299916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.299930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 
00:30:00.713 [2024-11-20 11:30:53.309814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.309904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.309927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.309936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.309944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.309964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 00:30:00.713 [2024-11-20 11:30:53.319883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.319934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.319949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.319956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.319963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.319979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 00:30:00.713 [2024-11-20 11:30:53.329869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.329918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.329932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.329939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.329946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.329960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 
00:30:00.713 [2024-11-20 11:30:53.339944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.340035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.340048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.340055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.340062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.340078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 00:30:00.713 [2024-11-20 11:30:53.349819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.713 [2024-11-20 11:30:53.349868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.713 [2024-11-20 11:30:53.349882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.713 [2024-11-20 11:30:53.349889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.713 [2024-11-20 11:30:53.349895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.713 [2024-11-20 11:30:53.349909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.713 qpair failed and we were unable to recover it. 00:30:00.713 [2024-11-20 11:30:53.359998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.360046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.360059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.360066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.360073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.360087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 
00:30:00.714 [2024-11-20 11:30:53.369953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.369999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.370016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.370024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.370030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.370045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 00:30:00.714 [2024-11-20 11:30:53.380057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.380112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.380125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.380132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.380139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.380153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 00:30:00.714 [2024-11-20 11:30:53.390043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.390097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.390111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.390118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.390125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.390139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 
00:30:00.714 [2024-11-20 11:30:53.400088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.400141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.400154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.400166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.400172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.400187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 00:30:00.714 [2024-11-20 11:30:53.410104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.410198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.410212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.410220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.410230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.410245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 00:30:00.714 [2024-11-20 11:30:53.420194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.420279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.420291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.420299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.420306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.420320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 
00:30:00.714 [2024-11-20 11:30:53.430151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.430213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.430228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.430235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.430241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.430265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 00:30:00.714 [2024-11-20 11:30:53.440211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.714 [2024-11-20 11:30:53.440260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.714 [2024-11-20 11:30:53.440273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.714 [2024-11-20 11:30:53.440280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.714 [2024-11-20 11:30:53.440287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.714 [2024-11-20 11:30:53.440301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.714 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.450183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.450234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.450247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.450255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.450262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.450277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 
00:30:00.976 [2024-11-20 11:30:53.460272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.460327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.460340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.460347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.460354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.460369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.470258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.470314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.470327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.470334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.470341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.470355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.480297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.480345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.480359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.480366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.480373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.480387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 
00:30:00.976 [2024-11-20 11:30:53.490307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.490360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.490373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.490380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.490387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.490401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.500402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.500487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.500504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.500511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.500517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.500531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.510354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.510405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.510418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.510426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.510432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.510446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 
00:30:00.976 [2024-11-20 11:30:53.520450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.520500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.520513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.520520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.520527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.520541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.530455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.530535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.530548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.530556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.530562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.530576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.976 qpair failed and we were unable to recover it. 00:30:00.976 [2024-11-20 11:30:53.540456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.976 [2024-11-20 11:30:53.540517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.976 [2024-11-20 11:30:53.540530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.976 [2024-11-20 11:30:53.540545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.976 [2024-11-20 11:30:53.540551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.976 [2024-11-20 11:30:53.540566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 
00:30:00.977 [2024-11-20 11:30:53.550479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.550539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.550552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.550559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.550566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.550580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.560533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.560580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.560593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.560600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.560606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.560620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.570505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.570551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.570563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.570571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.570577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.570591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 
00:30:00.977 [2024-11-20 11:30:53.580562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.580615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.580628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.580635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.580642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.580659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.590583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.590636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.590649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.590657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.590664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.590678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.600665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.600733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.600747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.600754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.600761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.600775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 
00:30:00.977 [2024-11-20 11:30:53.610521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.610600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.610613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.610620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.610626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.610641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.620718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.620771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.620784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.620792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.620798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.620813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.630716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.630778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.630791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.630799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.630805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.630819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 
00:30:00.977 [2024-11-20 11:30:53.640772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.977 [2024-11-20 11:30:53.640822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.977 [2024-11-20 11:30:53.640835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.977 [2024-11-20 11:30:53.640842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.977 [2024-11-20 11:30:53.640848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.977 [2024-11-20 11:30:53.640862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.977 qpair failed and we were unable to recover it. 00:30:00.977 [2024-11-20 11:30:53.650753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.650814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.650827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.650835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.650841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.650855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 00:30:00.978 [2024-11-20 11:30:53.660802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.660864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.660877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.660885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.660892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.660906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 
00:30:00.978 [2024-11-20 11:30:53.670826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.670887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.670911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.670924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.670931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.670951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 00:30:00.978 [2024-11-20 11:30:53.680854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.680935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.680959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.680968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.680975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.680994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 00:30:00.978 [2024-11-20 11:30:53.690852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.690907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.690930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.690939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.690946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.690966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 
00:30:00.978 [2024-11-20 11:30:53.700902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.700972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.700986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.700993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.701000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.701015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 00:30:00.978 [2024-11-20 11:30:53.710909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.978 [2024-11-20 11:30:53.710960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.978 [2024-11-20 11:30:53.710973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.978 [2024-11-20 11:30:53.710980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.978 [2024-11-20 11:30:53.710987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:00.978 [2024-11-20 11:30:53.711006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.978 qpair failed and we were unable to recover it. 00:30:01.240 [2024-11-20 11:30:53.720979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.240 [2024-11-20 11:30:53.721044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.240 [2024-11-20 11:30:53.721059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.240 [2024-11-20 11:30:53.721066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.240 [2024-11-20 11:30:53.721073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.240 [2024-11-20 11:30:53.721092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.240 qpair failed and we were unable to recover it. 
00:30:01.240 [2024-11-20 11:30:53.730929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.240 [2024-11-20 11:30:53.730976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.240 [2024-11-20 11:30:53.730990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.240 [2024-11-20 11:30:53.730997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.240 [2024-11-20 11:30:53.731004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.240 [2024-11-20 11:30:53.731019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.240 qpair failed and we were unable to recover it. 00:30:01.240 [2024-11-20 11:30:53.741098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.240 [2024-11-20 11:30:53.741157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.240 [2024-11-20 11:30:53.741173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.240 [2024-11-20 11:30:53.741181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.240 [2024-11-20 11:30:53.741187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.240 [2024-11-20 11:30:53.741202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.240 qpair failed and we were unable to recover it. 00:30:01.240 [2024-11-20 11:30:53.751040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.240 [2024-11-20 11:30:53.751099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.240 [2024-11-20 11:30:53.751112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.240 [2024-11-20 11:30:53.751119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.240 [2024-11-20 11:30:53.751126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.240 [2024-11-20 11:30:53.751140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.240 qpair failed and we were unable to recover it. 
00:30:01.240 [2024-11-20 11:30:53.760962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.240 [2024-11-20 11:30:53.761017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.240 [2024-11-20 11:30:53.761031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.240 [2024-11-20 11:30:53.761038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.240 [2024-11-20 11:30:53.761045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.240 [2024-11-20 11:30:53.761059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.240 qpair failed and we were unable to recover it. 00:30:01.240 [2024-11-20 11:30:53.771058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.240 [2024-11-20 11:30:53.771109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.240 [2024-11-20 11:30:53.771122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.240 [2024-11-20 11:30:53.771129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.240 [2024-11-20 11:30:53.771135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.240 [2024-11-20 11:30:53.771149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.240 qpair failed and we were unable to recover it. 00:30:01.240 [2024-11-20 11:30:53.781175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.241 [2024-11-20 11:30:53.781251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.241 [2024-11-20 11:30:53.781264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.241 [2024-11-20 11:30:53.781272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.241 [2024-11-20 11:30:53.781278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.241 [2024-11-20 11:30:53.781292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.241 qpair failed and we were unable to recover it. 
00:30:01.241 [2024-11-20 11:30:53.791121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.791226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.791241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.791248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.791255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.791270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.801179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.801232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.801248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.801256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.801263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.801279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.811185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.811231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.811244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.811251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.811258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.811272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.821269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.821360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.821373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.821380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.821388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.821402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.831262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.831315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.831328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.831335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.831342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.831357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.841329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.841384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.841398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.841405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.841416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.841431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.851343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.851395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.851409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.851416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.851422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.851436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.861293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.861348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.861361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.861368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.861375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.861390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.871296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.871347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.871360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.871367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.871374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.871388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.881445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.881496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.881510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.881518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.881525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.881539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.891409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.891454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.891467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.241 [2024-11-20 11:30:53.891474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.241 [2024-11-20 11:30:53.891481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.241 [2024-11-20 11:30:53.891495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.241 qpair failed and we were unable to recover it.
00:30:01.241 [2024-11-20 11:30:53.901484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.241 [2024-11-20 11:30:53.901551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.241 [2024-11-20 11:30:53.901564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.901572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.901578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.901592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.911501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.911555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.911568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.911576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.911582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.911597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.921557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.921658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.921671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.921678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.921685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.921700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.931525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.931622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.931640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.931647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.931654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.931669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.941605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.941661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.941674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.941681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.941687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.941702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.951597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.951646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.951659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.951666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.951673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.951687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.961681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.961729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.961742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.961750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.961756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.961771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.242 [2024-11-20 11:30:53.971644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.242 [2024-11-20 11:30:53.971694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.242 [2024-11-20 11:30:53.971706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.242 [2024-11-20 11:30:53.971714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.242 [2024-11-20 11:30:53.971723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.242 [2024-11-20 11:30:53.971738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.242 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:53.981714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:53.981766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:53.981779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:53.981787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:53.981794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:53.981808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:53.991702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:53.991754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:53.991767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:53.991775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:53.991781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:53.991795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.001764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.001818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.001832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.001839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.001845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.001859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.011798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.011877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.011891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.011899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.011906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.011921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.021821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.021884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.021907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.021916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.021923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.021943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.031831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.031882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.031897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.031904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.031911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.031927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.041884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.041939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.041952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.041960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.041966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.041981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.051855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.051903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.051916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.051923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.051930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.051944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.505 [2024-11-20 11:30:54.061934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.505 [2024-11-20 11:30:54.061987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.505 [2024-11-20 11:30:54.062005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.505 [2024-11-20 11:30:54.062013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.505 [2024-11-20 11:30:54.062019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.505 [2024-11-20 11:30:54.062034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.505 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.071917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.071962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.071975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.071982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.071988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.072003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.081986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.082035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.082048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.082055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.082061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.082075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.091982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.092037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.092050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.092058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.092064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.092078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.102094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.102174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.102187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.102198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.102205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.102221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.112049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.112099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.112112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.112120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.112126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.112141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.122103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.122154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.122171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.122178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.122185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.122199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.132073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.132122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.132135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.132142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.132149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.132168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.142091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.142147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.142164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.142172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.142178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.142197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.152162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.152216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.152230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.152237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.152244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.152259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.162229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.162283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.162296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.162304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.162311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.162325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.172208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.172258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.172270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.172278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.506 [2024-11-20 11:30:54.172284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.506 [2024-11-20 11:30:54.172299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.506 qpair failed and we were unable to recover it.
00:30:01.506 [2024-11-20 11:30:54.182268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.506 [2024-11-20 11:30:54.182322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.506 [2024-11-20 11:30:54.182336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.506 [2024-11-20 11:30:54.182343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.182350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.182364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.507 [2024-11-20 11:30:54.192241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.507 [2024-11-20 11:30:54.192296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.507 [2024-11-20 11:30:54.192310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.507 [2024-11-20 11:30:54.192317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.192323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.192338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.507 [2024-11-20 11:30:54.202356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.507 [2024-11-20 11:30:54.202411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.507 [2024-11-20 11:30:54.202425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.507 [2024-11-20 11:30:54.202432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.202439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.202453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.507 [2024-11-20 11:30:54.212311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.507 [2024-11-20 11:30:54.212365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.507 [2024-11-20 11:30:54.212378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.507 [2024-11-20 11:30:54.212386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.212392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.212406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.507 [2024-11-20 11:30:54.222388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.507 [2024-11-20 11:30:54.222443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.507 [2024-11-20 11:30:54.222456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.507 [2024-11-20 11:30:54.222463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.222469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.222483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.507 [2024-11-20 11:30:54.232385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.507 [2024-11-20 11:30:54.232436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.507 [2024-11-20 11:30:54.232449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.507 [2024-11-20 11:30:54.232459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.232466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.232480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.507 [2024-11-20 11:30:54.242302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.507 [2024-11-20 11:30:54.242356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.507 [2024-11-20 11:30:54.242371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.507 [2024-11-20 11:30:54.242378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.507 [2024-11-20 11:30:54.242385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.507 [2024-11-20 11:30:54.242400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.507 qpair failed and we were unable to recover it.
00:30:01.769 [2024-11-20 11:30:54.252391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.769 [2024-11-20 11:30:54.252446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.769 [2024-11-20 11:30:54.252459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.769 [2024-11-20 11:30:54.252467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.769 [2024-11-20 11:30:54.252474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.769 [2024-11-20 11:30:54.252489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.769 qpair failed and we were unable to recover it.
00:30:01.769 [2024-11-20 11:30:54.262477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.769 [2024-11-20 11:30:54.262581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.769 [2024-11-20 11:30:54.262595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.769 [2024-11-20 11:30:54.262602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.769 [2024-11-20 11:30:54.262609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.769 [2024-11-20 11:30:54.262623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.769 qpair failed and we were unable to recover it.
00:30:01.769 [2024-11-20 11:30:54.272443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.769 [2024-11-20 11:30:54.272491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.769 [2024-11-20 11:30:54.272504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.769 [2024-11-20 11:30:54.272511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.769 [2024-11-20 11:30:54.272518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.769 [2024-11-20 11:30:54.272539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.769 qpair failed and we were unable to recover it.
00:30:01.769 [2024-11-20 11:30:54.282511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.769 [2024-11-20 11:30:54.282570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.769 [2024-11-20 11:30:54.282583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.769 [2024-11-20 11:30:54.282590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.769 [2024-11-20 11:30:54.282597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.769 [2024-11-20 11:30:54.282611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.769 qpair failed and we were unable to recover it.
00:30:01.769 [2024-11-20 11:30:54.292470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.769 [2024-11-20 11:30:54.292525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.292539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.292546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.292553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.292567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.302587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.302644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.302657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.302665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.302672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.302686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.312532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.312583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.312596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.312603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.312610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.312624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.322595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.322648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.322662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.322669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.322676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.322690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.332597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.332659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.332672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.332679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.332686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.332700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.342613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.342670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.342682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.342690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.342696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.342710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.352675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.352725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.352738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.352745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.352752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.352766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.362702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.362753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.362770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.362778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.362784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.362798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.372725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.372816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.372829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.372837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.372844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.372858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.382763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.382860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.382874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.382881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.382888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.382903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.392785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.392838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.770 [2024-11-20 11:30:54.392851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.770 [2024-11-20 11:30:54.392859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.770 [2024-11-20 11:30:54.392865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.770 [2024-11-20 11:30:54.392879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.770 qpair failed and we were unable to recover it.
00:30:01.770 [2024-11-20 11:30:54.402818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.770 [2024-11-20 11:30:54.402902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.402915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.402922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.402933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.402948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.412739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.412792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.412807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.412814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.412820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.412835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.422898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.422949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.422962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.422970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.422976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.422991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.432813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.432873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.432887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.432895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.432902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.432920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.442964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.443016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.443030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.443038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.443044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.443059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.452937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.453024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.453037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.453045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.453052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.453067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.462925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.462989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.463002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.463009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.463016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.463030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.473014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.771 [2024-11-20 11:30:54.473066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.771 [2024-11-20 11:30:54.473079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.771 [2024-11-20 11:30:54.473086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.771 [2024-11-20 11:30:54.473093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:01.771 [2024-11-20 11:30:54.473107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.771 qpair failed and we were unable to recover it.
00:30:01.771 [2024-11-20 11:30:54.483098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.771 [2024-11-20 11:30:54.483180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.771 [2024-11-20 11:30:54.483193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.771 [2024-11-20 11:30:54.483200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.771 [2024-11-20 11:30:54.483208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.771 [2024-11-20 11:30:54.483223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.771 qpair failed and we were unable to recover it. 00:30:01.771 [2024-11-20 11:30:54.493062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.771 [2024-11-20 11:30:54.493109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.771 [2024-11-20 11:30:54.493126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.771 [2024-11-20 11:30:54.493134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.771 [2024-11-20 11:30:54.493140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.771 [2024-11-20 11:30:54.493155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.771 qpair failed and we were unable to recover it. 00:30:01.771 [2024-11-20 11:30:54.503137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.771 [2024-11-20 11:30:54.503243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.771 [2024-11-20 11:30:54.503257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.771 [2024-11-20 11:30:54.503264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.771 [2024-11-20 11:30:54.503270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:01.771 [2024-11-20 11:30:54.503285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.771 qpair failed and we were unable to recover it. 
00:30:02.033 [2024-11-20 11:30:54.513106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.513200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.513213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.513220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.513227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.513242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-11-20 11:30:54.523166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.523219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.523232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.523240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.523246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.523261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-11-20 11:30:54.533161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.533210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.533223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.533230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.533240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.533255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 
00:30:02.033 [2024-11-20 11:30:54.543228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.543283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.543298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.543305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.543316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.543331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-11-20 11:30:54.553224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.553302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.553318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.553326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.553333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.553347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-11-20 11:30:54.563259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.563324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.563336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.563344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.563351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.563365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 
00:30:02.033 [2024-11-20 11:30:54.573250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.573296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.573309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.573317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.573323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.573338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-11-20 11:30:54.583344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.583397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.583410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.583418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.583424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.583439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 00:30:02.033 [2024-11-20 11:30:54.593346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.593396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.593409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.593416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.033 [2024-11-20 11:30:54.593423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.033 [2024-11-20 11:30:54.593437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.033 qpair failed and we were unable to recover it. 
00:30:02.033 [2024-11-20 11:30:54.603413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.033 [2024-11-20 11:30:54.603471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.033 [2024-11-20 11:30:54.603484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.033 [2024-11-20 11:30:54.603491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.603498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.603512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.613361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.613417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.613431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.613438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.613444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.613458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.623333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.623390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.623407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.623414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.623421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.623435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-11-20 11:30:54.633468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.633556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.633569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.633576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.633582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.633597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.643486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.643555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.643567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.643575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.643582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.643596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.653522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.653566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.653580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.653587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.653594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.653608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-11-20 11:30:54.663440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.663494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.663507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.663517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.663523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.663537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.673611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.673692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.673705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.673712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.673720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.673734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.683569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.683637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.683650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.683657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.683664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.683678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-11-20 11:30:54.693635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.693711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.693724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.693732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.693739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.693753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.703636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.703690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.703703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.703710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.703717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.703735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.713673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.713723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.713736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.713743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.713750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.713764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 
00:30:02.034 [2024-11-20 11:30:54.723679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.723731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.723744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.723751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.034 [2024-11-20 11:30:54.723758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.034 [2024-11-20 11:30:54.723772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.034 qpair failed and we were unable to recover it. 00:30:02.034 [2024-11-20 11:30:54.733620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.034 [2024-11-20 11:30:54.733701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.034 [2024-11-20 11:30:54.733715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.034 [2024-11-20 11:30:54.733723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.035 [2024-11-20 11:30:54.733731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.035 [2024-11-20 11:30:54.733749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.035 qpair failed and we were unable to recover it. 00:30:02.035 [2024-11-20 11:30:54.743783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.035 [2024-11-20 11:30:54.743835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.035 [2024-11-20 11:30:54.743849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.035 [2024-11-20 11:30:54.743856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.035 [2024-11-20 11:30:54.743863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.035 [2024-11-20 11:30:54.743877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.035 qpair failed and we were unable to recover it. 
00:30:02.035 [2024-11-20 11:30:54.753761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.035 [2024-11-20 11:30:54.753815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.035 [2024-11-20 11:30:54.753828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.035 [2024-11-20 11:30:54.753836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.035 [2024-11-20 11:30:54.753842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.035 [2024-11-20 11:30:54.753856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.035 qpair failed and we were unable to recover it. 00:30:02.035 [2024-11-20 11:30:54.763777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.035 [2024-11-20 11:30:54.763827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.035 [2024-11-20 11:30:54.763840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.035 [2024-11-20 11:30:54.763848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.035 [2024-11-20 11:30:54.763854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.035 [2024-11-20 11:30:54.763869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.035 qpair failed and we were unable to recover it. 00:30:02.296 [2024-11-20 11:30:54.773805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.296 [2024-11-20 11:30:54.773850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.296 [2024-11-20 11:30:54.773863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.296 [2024-11-20 11:30:54.773871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.296 [2024-11-20 11:30:54.773877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.296 [2024-11-20 11:30:54.773891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.296 qpair failed and we were unable to recover it. 
00:30:02.296 [2024-11-20 11:30:54.783887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.296 [2024-11-20 11:30:54.783999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.296 [2024-11-20 11:30:54.784012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.296 [2024-11-20 11:30:54.784020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.296 [2024-11-20 11:30:54.784028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.296 [2024-11-20 11:30:54.784041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.296 qpair failed and we were unable to recover it. 00:30:02.296 [2024-11-20 11:30:54.793889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.296 [2024-11-20 11:30:54.793938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.296 [2024-11-20 11:30:54.793952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.296 [2024-11-20 11:30:54.793963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.296 [2024-11-20 11:30:54.793970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.296 [2024-11-20 11:30:54.793984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.296 qpair failed and we were unable to recover it. 00:30:02.296 [2024-11-20 11:30:54.803898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.296 [2024-11-20 11:30:54.803948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.296 [2024-11-20 11:30:54.803961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.296 [2024-11-20 11:30:54.803968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.296 [2024-11-20 11:30:54.803975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.296 [2024-11-20 11:30:54.803989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.296 qpair failed and we were unable to recover it. 
00:30:02.296 [2024-11-20 11:30:54.813941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.296 [2024-11-20 11:30:54.814028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.296 [2024-11-20 11:30:54.814041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.296 [2024-11-20 11:30:54.814049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.296 [2024-11-20 11:30:54.814056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.296 [2024-11-20 11:30:54.814070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.296 qpair failed and we were unable to recover it. 00:30:02.296 [2024-11-20 11:30:54.824000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.296 [2024-11-20 11:30:54.824053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.296 [2024-11-20 11:30:54.824066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.296 [2024-11-20 11:30:54.824073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.296 [2024-11-20 11:30:54.824080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.824094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.834007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.834059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.834072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.834080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.834087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.834104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 
00:30:02.297 [2024-11-20 11:30:54.843965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.844015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.844028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.844036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.844042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.844057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.854039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.854087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.854100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.854108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.854115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.854129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.864097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.864152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.864170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.864177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.864184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.864198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 
00:30:02.297 [2024-11-20 11:30:54.874115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.874172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.874186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.874193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.874200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.874214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.884089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.884139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.884152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.884164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.884171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.884186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.894134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.894215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.894229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.894236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.894243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.894258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 
00:30:02.297 [2024-11-20 11:30:54.904146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.904243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.904256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.904264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.904271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.904285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.914201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.914248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.914261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.914268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.914275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.914289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.924221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.924267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.924283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.924290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.924297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.924311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 
00:30:02.297 [2024-11-20 11:30:54.934251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.934301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.934314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.934321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.934328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.934343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.944212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.944264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.944277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.944285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.944291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.297 [2024-11-20 11:30:54.944306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.297 qpair failed and we were unable to recover it. 00:30:02.297 [2024-11-20 11:30:54.954293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.297 [2024-11-20 11:30:54.954346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.297 [2024-11-20 11:30:54.954358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.297 [2024-11-20 11:30:54.954366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.297 [2024-11-20 11:30:54.954373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:54.954387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 
00:30:02.298 [2024-11-20 11:30:54.964353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:54.964422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:54.964435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:54.964443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:54.964457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:54.964472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 00:30:02.298 [2024-11-20 11:30:54.974340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:54.974389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:54.974402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:54.974410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:54.974417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:54.974431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 00:30:02.298 [2024-11-20 11:30:54.984458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:54.984514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:54.984527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:54.984534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:54.984541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:54.984555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 
00:30:02.298 [2024-11-20 11:30:54.994404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:54.994454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:54.994468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:54.994475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:54.994482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:54.994496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 00:30:02.298 [2024-11-20 11:30:55.004430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:55.004479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:55.004492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:55.004499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:55.004506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:55.004520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 00:30:02.298 [2024-11-20 11:30:55.014477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:55.014527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:55.014540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:55.014547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:55.014554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:55.014567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 
00:30:02.298 [2024-11-20 11:30:55.024462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:55.024517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:55.024531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:55.024538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:55.024544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:55.024558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 00:30:02.298 [2024-11-20 11:30:55.034526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.298 [2024-11-20 11:30:55.034576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.298 [2024-11-20 11:30:55.034589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.298 [2024-11-20 11:30:55.034596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.298 [2024-11-20 11:30:55.034603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.298 [2024-11-20 11:30:55.034617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.298 qpair failed and we were unable to recover it. 00:30:02.560 [2024-11-20 11:30:55.044539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.044588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.044601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.044608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.044615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.560 [2024-11-20 11:30:55.044629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.560 qpair failed and we were unable to recover it. 
00:30:02.560 [2024-11-20 11:30:55.054553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.054600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.054616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.054624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.054631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.560 [2024-11-20 11:30:55.054645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.560 qpair failed and we were unable to recover it. 00:30:02.560 [2024-11-20 11:30:55.064640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.064691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.064704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.064711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.064717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.560 [2024-11-20 11:30:55.064732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.560 qpair failed and we were unable to recover it. 00:30:02.560 [2024-11-20 11:30:55.074635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.074693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.074706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.074713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.074720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.560 [2024-11-20 11:30:55.074734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.560 qpair failed and we were unable to recover it. 
00:30:02.560 [2024-11-20 11:30:55.084613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.084662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.084675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.084682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.084688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.560 [2024-11-20 11:30:55.084702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.560 qpair failed and we were unable to recover it. 00:30:02.560 [2024-11-20 11:30:55.094689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.094734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.094747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.094754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.094765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.560 [2024-11-20 11:30:55.094779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.560 qpair failed and we were unable to recover it. 00:30:02.560 [2024-11-20 11:30:55.104798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.560 [2024-11-20 11:30:55.104866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.560 [2024-11-20 11:30:55.104879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.560 [2024-11-20 11:30:55.104887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.560 [2024-11-20 11:30:55.104894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.104908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 
00:30:02.561 [2024-11-20 11:30:55.114744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.114797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.114810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.114818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.114824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.114838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.124760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.124811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.124824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.124831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.124840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.124854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.134797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.134843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.134856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.134863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.134870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.134884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 
00:30:02.561 [2024-11-20 11:30:55.144884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.144974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.144987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.144995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.145002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.145016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.154866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.154919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.154943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.154952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.154959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.154979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.164872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.164940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.164954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.164962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.164969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.164984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 
00:30:02.561 [2024-11-20 11:30:55.174859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.174915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.174953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.174963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.174970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.174991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.184988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.185075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.185103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.185112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.185119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.185139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.194976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.195032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.195047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.195054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.195061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.195076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 
00:30:02.561 [2024-11-20 11:30:55.205004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.205063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.205078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.205086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.205093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.205110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.215006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.215060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.215074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.215081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.215088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.561 [2024-11-20 11:30:55.215102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.561 qpair failed and we were unable to recover it. 00:30:02.561 [2024-11-20 11:30:55.225068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.561 [2024-11-20 11:30:55.225122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.561 [2024-11-20 11:30:55.225136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.561 [2024-11-20 11:30:55.225146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.561 [2024-11-20 11:30:55.225153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.225172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 
00:30:02.562 [2024-11-20 11:30:55.235060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.235112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.235125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.235132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.235139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.235153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 00:30:02.562 [2024-11-20 11:30:55.244957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.245007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.245020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.245027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.245034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.245048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 00:30:02.562 [2024-11-20 11:30:55.255114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.255166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.255180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.255187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.255194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.255208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 
00:30:02.562 [2024-11-20 11:30:55.265200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.265255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.265268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.265276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.265283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.265300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 00:30:02.562 [2024-11-20 11:30:55.275175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.275229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.275242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.275249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.275255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.275270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 00:30:02.562 [2024-11-20 11:30:55.285192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.285277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.285290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.285298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.285305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.285319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 
00:30:02.562 [2024-11-20 11:30:55.295231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.562 [2024-11-20 11:30:55.295277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.562 [2024-11-20 11:30:55.295290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.562 [2024-11-20 11:30:55.295298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.562 [2024-11-20 11:30:55.295304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.562 [2024-11-20 11:30:55.295318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.562 qpair failed and we were unable to recover it. 00:30:02.824 [2024-11-20 11:30:55.305302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.824 [2024-11-20 11:30:55.305376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.824 [2024-11-20 11:30:55.305389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.824 [2024-11-20 11:30:55.305396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.824 [2024-11-20 11:30:55.305403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.824 [2024-11-20 11:30:55.305418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.824 qpair failed and we were unable to recover it. 00:30:02.824 [2024-11-20 11:30:55.315318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.824 [2024-11-20 11:30:55.315379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.824 [2024-11-20 11:30:55.315393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.824 [2024-11-20 11:30:55.315400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.824 [2024-11-20 11:30:55.315407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.824 [2024-11-20 11:30:55.315421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.824 qpair failed and we were unable to recover it. 
00:30:02.824 [2024-11-20 11:30:55.325297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.824 [2024-11-20 11:30:55.325347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.824 [2024-11-20 11:30:55.325360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.824 [2024-11-20 11:30:55.325367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.824 [2024-11-20 11:30:55.325373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.824 [2024-11-20 11:30:55.325388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.824 qpair failed and we were unable to recover it. 00:30:02.824 [2024-11-20 11:30:55.335321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.824 [2024-11-20 11:30:55.335368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.824 [2024-11-20 11:30:55.335382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.824 [2024-11-20 11:30:55.335389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.824 [2024-11-20 11:30:55.335396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.824 [2024-11-20 11:30:55.335410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.824 qpair failed and we were unable to recover it. 00:30:02.824 [2024-11-20 11:30:55.345396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.824 [2024-11-20 11:30:55.345453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.824 [2024-11-20 11:30:55.345466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.824 [2024-11-20 11:30:55.345473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.824 [2024-11-20 11:30:55.345479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.824 [2024-11-20 11:30:55.345494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.824 qpair failed and we were unable to recover it. 
00:30:02.824 [2024-11-20 11:30:55.355400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.824 [2024-11-20 11:30:55.355457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.824 [2024-11-20 11:30:55.355470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.824 [2024-11-20 11:30:55.355480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.824 [2024-11-20 11:30:55.355487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.824 [2024-11-20 11:30:55.355501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 00:30:02.825 [2024-11-20 11:30:55.365439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.365492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.365505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.365512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.365519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.365533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 00:30:02.825 [2024-11-20 11:30:55.375438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.375486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.375499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.375506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.375513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.375527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 
00:30:02.825 [2024-11-20 11:30:55.385505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.385586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.385599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.385607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.385614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.385627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 00:30:02.825 [2024-11-20 11:30:55.395505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.395557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.395569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.395576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.395583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.395600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 00:30:02.825 [2024-11-20 11:30:55.405504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.405557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.405570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.405577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.405584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.405598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 
00:30:02.825 [2024-11-20 11:30:55.415538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.415598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.415611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.415618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.415625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.415639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 00:30:02.825 [2024-11-20 11:30:55.425617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.425708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.425722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.425730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.425737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.425751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 00:30:02.825 [2024-11-20 11:30:55.435611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.825 [2024-11-20 11:30:55.435667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.825 [2024-11-20 11:30:55.435680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.825 [2024-11-20 11:30:55.435687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.825 [2024-11-20 11:30:55.435694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.825 [2024-11-20 11:30:55.435708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.825 qpair failed and we were unable to recover it. 
00:30:02.826 [2024-11-20 11:30:55.445636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.445691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.445705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.445712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.445718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.445732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 00:30:02.826 [2024-11-20 11:30:55.455622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.455669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.455682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.455689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.455695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.455709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 00:30:02.826 [2024-11-20 11:30:55.465740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.465791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.465804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.465811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.465818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.465831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 
00:30:02.826 [2024-11-20 11:30:55.475602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.475654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.475667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.475674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.475681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.475695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 00:30:02.826 [2024-11-20 11:30:55.485716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.485763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.485779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.485786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.485793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.485807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 00:30:02.826 [2024-11-20 11:30:55.495756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.495806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.495819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.495827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.495833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.495847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 
00:30:02.826 [2024-11-20 11:30:55.505835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.505889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.505902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.505909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.505915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.505929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 00:30:02.826 [2024-11-20 11:30:55.515712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.515764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.515777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.515784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.515791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.515805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 00:30:02.826 [2024-11-20 11:30:55.525859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.826 [2024-11-20 11:30:55.525910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.826 [2024-11-20 11:30:55.525923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.826 [2024-11-20 11:30:55.525930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.826 [2024-11-20 11:30:55.525944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.826 [2024-11-20 11:30:55.525958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.826 qpair failed and we were unable to recover it. 
00:30:02.826 [2024-11-20 11:30:55.535839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.827 [2024-11-20 11:30:55.535893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.827 [2024-11-20 11:30:55.535917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.827 [2024-11-20 11:30:55.535926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.827 [2024-11-20 11:30:55.535933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.827 [2024-11-20 11:30:55.535952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.827 qpair failed and we were unable to recover it. 00:30:02.827 [2024-11-20 11:30:55.545832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.827 [2024-11-20 11:30:55.545916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.827 [2024-11-20 11:30:55.545940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.827 [2024-11-20 11:30:55.545949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.827 [2024-11-20 11:30:55.545956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.827 [2024-11-20 11:30:55.545976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.827 qpair failed and we were unable to recover it. 00:30:02.827 [2024-11-20 11:30:55.555929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.827 [2024-11-20 11:30:55.555989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.827 [2024-11-20 11:30:55.556012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.827 [2024-11-20 11:30:55.556021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.827 [2024-11-20 11:30:55.556028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90 00:30:02.827 [2024-11-20 11:30:55.556048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.827 qpair failed and we were unable to recover it. 
00:30:03.089 [2024-11-20 11:30:55.565953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.089 [2024-11-20 11:30:55.566055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.089 [2024-11-20 11:30:55.566070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.089 [2024-11-20 11:30:55.566078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.089 [2024-11-20 11:30:55.566086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.089 [2024-11-20 11:30:55.566101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.089 qpair failed and we were unable to recover it.
00:30:03.089 [2024-11-20 11:30:55.575994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.089 [2024-11-20 11:30:55.576038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.089 [2024-11-20 11:30:55.576051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.089 [2024-11-20 11:30:55.576059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.089 [2024-11-20 11:30:55.576066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.089 [2024-11-20 11:30:55.576080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.089 qpair failed and we were unable to recover it.
00:30:03.089 [2024-11-20 11:30:55.586045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.089 [2024-11-20 11:30:55.586103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.089 [2024-11-20 11:30:55.586116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.089 [2024-11-20 11:30:55.586123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.089 [2024-11-20 11:30:55.586130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.089 [2024-11-20 11:30:55.586145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.089 qpair failed and we were unable to recover it.
00:30:03.089 [2024-11-20 11:30:55.596040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.089 [2024-11-20 11:30:55.596089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.089 [2024-11-20 11:30:55.596103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.089 [2024-11-20 11:30:55.596110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.089 [2024-11-20 11:30:55.596117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.089 [2024-11-20 11:30:55.596131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.089 qpair failed and we were unable to recover it.
00:30:03.089 [2024-11-20 11:30:55.606014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.606060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.606073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.606080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.606087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.606101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.616077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.616123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.616139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.616147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.616153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.616172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.626149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.626209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.626222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.626230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.626237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.626251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.636033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.636083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.636099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.636106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.636113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.636128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.646162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.646217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.646231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.646238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.646245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.646259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.656189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.656235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.656248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.656256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.656266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.656282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.666259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.666311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.666324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.666331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.666338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.666353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.676243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.676294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.676307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.676314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.676321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.676335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.686261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.686325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.686338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.686345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.686352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.686366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.696285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.696329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.696342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.696350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.696356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.696370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.706390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.706446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.706459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.706466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.706473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.706487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.716424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.716507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.716520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.716527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.716534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.716548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.726404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.090 [2024-11-20 11:30:55.726491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.090 [2024-11-20 11:30:55.726504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.090 [2024-11-20 11:30:55.726511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.090 [2024-11-20 11:30:55.726518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.090 [2024-11-20 11:30:55.726532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.090 qpair failed and we were unable to recover it.
00:30:03.090 [2024-11-20 11:30:55.736442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.736533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.736547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.736554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.736561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.736575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.746500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.746600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.746614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.746621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.746629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.746643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.756490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.756545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.756558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.756565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.756571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.756585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.766499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.766550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.766563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.766570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.766577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.766591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.776511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.776555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.776568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.776576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.776582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.776596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.786596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.786650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.786663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.786673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.786680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.786694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.796588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.796655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.796669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.796676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.796683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.796698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.806563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.806609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.806622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.806630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.806637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.806651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.816620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.816680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.816694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.816701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.816708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.816722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.091 [2024-11-20 11:30:55.826701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.091 [2024-11-20 11:30:55.826758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.091 [2024-11-20 11:30:55.826771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.091 [2024-11-20 11:30:55.826778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.091 [2024-11-20 11:30:55.826785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.091 [2024-11-20 11:30:55.826803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.091 qpair failed and we were unable to recover it.
00:30:03.353 [2024-11-20 11:30:55.836580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.353 [2024-11-20 11:30:55.836628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.353 [2024-11-20 11:30:55.836642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.353 [2024-11-20 11:30:55.836649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.353 [2024-11-20 11:30:55.836656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.353 [2024-11-20 11:30:55.836670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.353 qpair failed and we were unable to recover it.
00:30:03.353 [2024-11-20 11:30:55.846734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.353 [2024-11-20 11:30:55.846781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.353 [2024-11-20 11:30:55.846795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.353 [2024-11-20 11:30:55.846802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.353 [2024-11-20 11:30:55.846809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.353 [2024-11-20 11:30:55.846823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.353 qpair failed and we were unable to recover it.
00:30:03.353 [2024-11-20 11:30:55.856771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.353 [2024-11-20 11:30:55.856818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.353 [2024-11-20 11:30:55.856832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.353 [2024-11-20 11:30:55.856839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.353 [2024-11-20 11:30:55.856846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.353 [2024-11-20 11:30:55.856860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.353 qpair failed and we were unable to recover it.
00:30:03.353 [2024-11-20 11:30:55.866809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.353 [2024-11-20 11:30:55.866865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.353 [2024-11-20 11:30:55.866879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.353 [2024-11-20 11:30:55.866887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.353 [2024-11-20 11:30:55.866893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.353 [2024-11-20 11:30:55.866907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.353 qpair failed and we were unable to recover it.
00:30:03.353 [2024-11-20 11:30:55.876798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.353 [2024-11-20 11:30:55.876856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.353 [2024-11-20 11:30:55.876880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.353 [2024-11-20 11:30:55.876889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.353 [2024-11-20 11:30:55.876896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.353 [2024-11-20 11:30:55.876916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.886827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.886875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.886891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.886898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.886906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.886922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.896869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.896918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.896932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.896940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.896946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.896961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.906940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.907015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.907028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.907035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.907042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.907056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.916848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.916937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.916951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.916962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.916971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.916985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.926974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.927021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.927035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.927042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.927049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.927063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.936958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.937031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.937044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.937052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.937058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.937074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.947036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.947090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.947103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.947110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.947117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.947132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.957030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.957080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.957094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.957101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.957108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.957126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.967052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.967144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.967161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.967169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.967176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.967190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.977080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.977127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.977139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.977147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.977153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.977171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.987155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.987215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.987228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.987236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.987243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.987257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:55.997130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:55.997179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:55.997192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:55.997200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:55.997206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.354 [2024-11-20 11:30:55.997221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.354 qpair failed and we were unable to recover it.
00:30:03.354 [2024-11-20 11:30:56.007161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.354 [2024-11-20 11:30:56.007209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.354 [2024-11-20 11:30:56.007222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.354 [2024-11-20 11:30:56.007229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.354 [2024-11-20 11:30:56.007236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.007250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.017174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.017240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.017253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.017260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.017267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.017281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.027254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.027307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.027321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.027328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.027335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.027349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.037226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.037276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.037289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.037297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.037303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.037318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.047252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.047303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.047319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.047326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.047333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.047347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.057315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.057367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.057380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.057388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.057395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.057410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.067372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.067424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.067437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.067445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.067451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.067465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.077367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.077456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.077469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.077477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.077484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.077498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.355 [2024-11-20 11:30:56.087375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.355 [2024-11-20 11:30:56.087469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.355 [2024-11-20 11:30:56.087482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.355 [2024-11-20 11:30:56.087490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.355 [2024-11-20 11:30:56.087500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.355 [2024-11-20 11:30:56.087514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.355 qpair failed and we were unable to recover it.
00:30:03.616 [2024-11-20 11:30:56.097405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.616 [2024-11-20 11:30:56.097458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.617 [2024-11-20 11:30:56.097472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.617 [2024-11-20 11:30:56.097479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.617 [2024-11-20 11:30:56.097485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.617 [2024-11-20 11:30:56.097500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.617 qpair failed and we were unable to recover it.
00:30:03.617 [2024-11-20 11:30:56.107454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.617 [2024-11-20 11:30:56.107507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.617 [2024-11-20 11:30:56.107520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.617 [2024-11-20 11:30:56.107527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.617 [2024-11-20 11:30:56.107534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.617 [2024-11-20 11:30:56.107548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.617 qpair failed and we were unable to recover it.
00:30:03.617 [2024-11-20 11:30:56.117481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.617 [2024-11-20 11:30:56.117525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.617 [2024-11-20 11:30:56.117538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.617 [2024-11-20 11:30:56.117545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.617 [2024-11-20 11:30:56.117552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa99c000b90
00:30:03.617 [2024-11-20 11:30:56.117566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:03.617 qpair failed and we were unable to recover it.
00:30:03.617 [2024-11-20 11:30:56.127467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:03.617 [2024-11-20 11:30:56.127564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:03.617 [2024-11-20 11:30:56.127628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:03.617 [2024-11-20 11:30:56.127653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:03.617 [2024-11-20 11:30:56.127673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd170c0
00:30:03.617 [2024-11-20 11:30:56.127726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:03.617 qpair failed and we were unable to recover it.
00:30:03.617 [2024-11-20 11:30:56.137503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.617 [2024-11-20 11:30:56.137571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.617 [2024-11-20 11:30:56.137599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.617 [2024-11-20 11:30:56.137614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.617 [2024-11-20 11:30:56.137627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd170c0 00:30:03.617 [2024-11-20 11:30:56.137656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:03.617 qpair failed and we were unable to recover it. 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 
00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Write completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 Read completed with error (sct=0, sc=8) 00:30:03.617 starting I/O failed 00:30:03.617 [2024-11-20 11:30:56.138560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.617 [2024-11-20 11:30:56.147565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.617 [2024-11-20 11:30:56.147652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.617 [2024-11-20 11:30:56.147696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.617 [2024-11-20 11:30:56.147717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.617 [2024-11-20 11:30:56.147736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa998000b90 00:30:03.617 [2024-11-20 11:30:56.147778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.617 qpair failed and we were unable to recover it. 00:30:03.617 [2024-11-20 11:30:56.157582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:03.617 [2024-11-20 11:30:56.157660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:03.617 [2024-11-20 11:30:56.157696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:03.617 [2024-11-20 11:30:56.157715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:03.617 [2024-11-20 11:30:56.157733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa998000b90 00:30:03.617 [2024-11-20 11:30:56.157770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:03.617 qpair failed and we were unable to recover it. 00:30:03.617 [2024-11-20 11:30:56.157964] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:03.617 A controller has encountered a failure and is being reset. 00:30:03.617 [2024-11-20 11:30:56.158076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0ce00 (9): Bad file descriptor 00:30:03.617 Controller properly reset. 
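Note on the failure pattern above: the repeated CONNECT rejections (target-side "Unknown controller ID 0x1"; host-side sct 1 / sc 130, which decodes to the NVMe-oF Fabrics "Connect Invalid Parameters" status) and the burst of 32 aborted I/Os (sct 0 / sc 8, the generic "Command Aborted due to SQ Deletion" status) are exactly the fault window this disconnect test is built to create; the keep-alive failure and the "Controller properly reset." line that follow show the host driver recovering. A minimal hand-run sketch of the same fault window against the target traced in this log — the rpc.py path and the 10.0.0.2:4420 listener are taken from earlier trace lines, the $rpc shorthand is ours, and it assumes the nqn.2016-06.io.spdk:cnode1 subsystem is still up:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the TCP listener: in-flight qpairs fail and new CONNECT attempts are
    # rejected, producing "qpair failed and we were unable to recover it" records
    # like the ones above.
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # Restore the listener so the host-side reset/reconnect path can complete.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420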
00:30:03.617 Initializing NVMe Controllers 00:30:03.617 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:03.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:03.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:03.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:03.617 Initialization complete. Launching workers. 00:30:03.617 Starting thread on core 1 00:30:03.617 Starting thread on core 2 00:30:03.617 Starting thread on core 3 00:30:03.617 Starting thread on core 0 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:03.617 00:30:03.617 real 0m11.477s 00:30:03.617 user 0m22.055s 00:30:03.617 sys 0m3.641s 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:03.617 ************************************ 00:30:03.617 END TEST nvmf_target_disconnect_tc2 00:30:03.617 ************************************ 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:03.617 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.618 rmmod nvme_tcp 00:30:03.618 rmmod nvme_fabrics 00:30:03.618 rmmod nvme_keyring 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2926206 ']' 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2926206 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2926206 ']' 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2926206 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:30:03.618 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2926206 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2926206' 00:30:03.878 killing process with pid 2926206 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2926206 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2926206 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.878 11:30:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.424 11:30:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.424 00:30:06.424 real 0m21.719s 00:30:06.424 user 0m49.965s 00:30:06.424 sys 0m9.740s 00:30:06.424 11:30:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.424 11:30:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:06.424 ************************************ 00:30:06.424 END TEST nvmf_target_disconnect 00:30:06.424 ************************************ 00:30:06.424 11:30:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:06.424 00:30:06.424 real 6m33.828s 00:30:06.424 user 11m32.125s 00:30:06.424 sys 2m15.072s 00:30:06.424 11:30:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.424 11:30:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.424 ************************************ 00:30:06.424 END TEST nvmf_host 00:30:06.424 ************************************ 00:30:06.424 11:30:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:06.424 11:30:58 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:06.424 11:30:58 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:06.424 11:30:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.424 11:30:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.424 11:30:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:06.424 ************************************ 00:30:06.424 START TEST nvmf_target_core_interrupt_mode 00:30:06.424 ************************************ 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:06.424 * Looking for test storage... 00:30:06.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.424 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.425 --rc genhtml_branch_coverage=1 00:30:06.425 --rc genhtml_function_coverage=1 00:30:06.425 --rc genhtml_legend=1 00:30:06.425 --rc geninfo_all_blocks=1 00:30:06.425 --rc geninfo_unexecuted_blocks=1 00:30:06.425 00:30:06.425 ' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.425 --rc genhtml_branch_coverage=1 00:30:06.425 --rc genhtml_function_coverage=1 00:30:06.425 --rc genhtml_legend=1 00:30:06.425 --rc geninfo_all_blocks=1 00:30:06.425 --rc geninfo_unexecuted_blocks=1 00:30:06.425 00:30:06.425 ' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.425 --rc genhtml_branch_coverage=1 00:30:06.425 --rc genhtml_function_coverage=1 00:30:06.425 --rc genhtml_legend=1 00:30:06.425 --rc geninfo_all_blocks=1 00:30:06.425 --rc geninfo_unexecuted_blocks=1 00:30:06.425 00:30:06.425 ' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.425 --rc genhtml_branch_coverage=1 00:30:06.425 --rc genhtml_function_coverage=1 00:30:06.425 --rc genhtml_legend=1 00:30:06.425 --rc geninfo_all_blocks=1 00:30:06.425 --rc geninfo_unexecuted_blocks=1 00:30:06.425 00:30:06.425 ' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.425 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.425 ************************************ 00:30:06.425 START TEST nvmf_abort 00:30:06.425 ************************************ 00:30:06.425 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:06.425 * Looking for test storage... 00:30:06.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.425 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.425 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.425 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.691 --rc genhtml_branch_coverage=1 00:30:06.691 --rc genhtml_function_coverage=1 00:30:06.691 --rc genhtml_legend=1 00:30:06.691 --rc geninfo_all_blocks=1 00:30:06.691 --rc geninfo_unexecuted_blocks=1 00:30:06.691 00:30:06.691 ' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.691 --rc genhtml_branch_coverage=1 00:30:06.691 --rc genhtml_function_coverage=1 00:30:06.691 --rc genhtml_legend=1 00:30:06.691 --rc geninfo_all_blocks=1 00:30:06.691 --rc geninfo_unexecuted_blocks=1 00:30:06.691 00:30:06.691 ' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.691 --rc genhtml_branch_coverage=1 00:30:06.691 --rc genhtml_function_coverage=1 00:30:06.691 --rc genhtml_legend=1 00:30:06.691 --rc geninfo_all_blocks=1 00:30:06.691 --rc geninfo_unexecuted_blocks=1 00:30:06.691 00:30:06.691 ' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.691 --rc genhtml_branch_coverage=1 00:30:06.691 --rc genhtml_function_coverage=1 00:30:06.691 --rc genhtml_legend=1 00:30:06.691 --rc geninfo_all_blocks=1 00:30:06.691 --rc geninfo_unexecuted_blocks=1 00:30:06.691 00:30:06.691 ' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.691 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.692 11:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.692 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.874 11:31:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:14.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:14.874 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:14.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:14.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.874 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:30:14.875 00:30:14.875 --- 10.0.0.2 ping statistics --- 00:30:14.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.875 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:30:14.875 00:30:14.875 --- 10.0.0.1 ping statistics --- 00:30:14.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.875 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2931725 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2931725 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2931725 ']' 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.875 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:14.875 [2024-11-20 11:31:06.877120] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:14.875 [2024-11-20 11:31:06.878256] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:30:14.875 [2024-11-20 11:31:06.878307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.875 [2024-11-20 11:31:06.977112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.875 [2024-11-20 11:31:07.028550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.875 [2024-11-20 11:31:07.028597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.875 [2024-11-20 11:31:07.028606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.875 [2024-11-20 11:31:07.028613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.875 [2024-11-20 11:31:07.028620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.875 [2024-11-20 11:31:07.030408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.875 [2024-11-20 11:31:07.030570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.875 [2024-11-20 11:31:07.030571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.875 [2024-11-20 11:31:07.106670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:14.875 [2024-11-20 11:31:07.107603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:14.875 [2024-11-20 11:31:07.108090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
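For reference, the namespace plumbing and target launch traced above reduce to the short sequence below. Interface names, addresses, and binary paths are copied from this run; the readiness loop at the end is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation.

  # Move the target-side port into a private namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic in from the initiator-side port (the traced rule also
  # tags itself with an SPDK_NVMF comment so cleanup can find it later).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Launch the target inside the namespace: interrupt mode, cores 1-3 (0xE).
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # Simplified readiness check: wait for the RPC socket before calling rpc.py.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done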
00:30:14.875 [2024-11-20 11:31:07.108240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.137 [2024-11-20 11:31:07.743469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.137 Malloc0 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.137 Delay0 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.137 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.138 [2024-11-20 11:31:07.839431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.138 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.138 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:15.138 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.138 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:15.138 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.138 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:15.399 [2024-11-20 11:31:07.940111] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:17.312 Initializing NVMe Controllers 00:30:17.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:17.312 controller IO queue size 128 less than required 00:30:17.312 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:17.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:17.312 Initialization complete. Launching workers. 
00:30:17.312 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28552 00:30:17.312 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28609, failed to submit 66 00:30:17.312 success 28552, unsuccessful 57, failed 0 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.312 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.312 rmmod nvme_tcp 00:30:17.573 rmmod nvme_fabrics 00:30:17.573 rmmod nvme_keyring 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2931725 ']' 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2931725 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2931725 ']' 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2931725 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2931725 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2931725' 00:30:17.573 killing process with pid 2931725 
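Stripped of the xtrace prefixes, the RPC sequence that built and exercised the abort target above is compact. Every value is copied from this run; the roughly one-second latencies injected by the delay bdev are what keep reads queued long enough for the abort example to catch them in flight.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB backing store, 4 KiB blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000 # avg/p99 read+write latency, in us
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Drive I/O at queue depth 128 for 1 second, submitting aborts against it.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128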
00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2931725 00:30:17.573 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2931725 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.835 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.747 00:30:19.747 real 0m13.408s 00:30:19.747 user 0m10.943s 00:30:19.747 sys 0m6.886s 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:19.747 ************************************ 00:30:19.747 END TEST nvmf_abort 00:30:19.747 ************************************ 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.747 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:20.008 ************************************ 00:30:20.008 START TEST nvmf_ns_hotplug_stress 00:30:20.008 ************************************ 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:20.008 * Looking for test storage... 
00:30:20.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:20.008 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:20.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.009 --rc genhtml_branch_coverage=1 00:30:20.009 --rc genhtml_function_coverage=1 00:30:20.009 --rc genhtml_legend=1 00:30:20.009 --rc geninfo_all_blocks=1 00:30:20.009 --rc geninfo_unexecuted_blocks=1 00:30:20.009 00:30:20.009 ' 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:20.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.009 --rc genhtml_branch_coverage=1 00:30:20.009 --rc genhtml_function_coverage=1 00:30:20.009 --rc genhtml_legend=1 00:30:20.009 --rc geninfo_all_blocks=1 00:30:20.009 --rc geninfo_unexecuted_blocks=1 00:30:20.009 00:30:20.009 ' 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:20.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.009 --rc genhtml_branch_coverage=1 00:30:20.009 --rc genhtml_function_coverage=1 00:30:20.009 --rc genhtml_legend=1 00:30:20.009 --rc geninfo_all_blocks=1 00:30:20.009 --rc geninfo_unexecuted_blocks=1 00:30:20.009 00:30:20.009 ' 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:20.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.009 --rc genhtml_branch_coverage=1 00:30:20.009 --rc genhtml_function_coverage=1 
00:30:20.009 --rc genhtml_legend=1 00:30:20.009 --rc geninfo_all_blocks=1 00:30:20.009 --rc geninfo_unexecuted_blocks=1 00:30:20.009 00:30:20.009 ' 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.009 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:20.270 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.271 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:20.271 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.271 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:20.271 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:20.271 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:20.271 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.413 11:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.413 11:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:28.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:28.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.413 
11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:28.413 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:28.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.413 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.414 11:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.414 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:30:28.414 00:30:28.414 --- 10.0.0.2 ping statistics --- 00:30:28.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.414 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:30:28.414 00:30:28.414 --- 10.0.0.1 ping statistics --- 00:30:28.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.414 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2936647 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2936647 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2936647 ']' 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.414 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.414 [2024-11-20 11:31:20.301549] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.414 [2024-11-20 11:31:20.302689] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:30:28.414 [2024-11-20 11:31:20.302740] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.414 [2024-11-20 11:31:20.402618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.414 [2024-11-20 11:31:20.454089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.414 [2024-11-20 11:31:20.454137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.414 [2024-11-20 11:31:20.454151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.414 [2024-11-20 11:31:20.454166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.414 [2024-11-20 11:31:20.454173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:28.414 [2024-11-20 11:31:20.455930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.414 [2024-11-20 11:31:20.456092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.414 [2024-11-20 11:31:20.456093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.414 [2024-11-20 11:31:20.531947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.414 [2024-11-20 11:31:20.532929] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:28.414 [2024-11-20 11:31:20.533369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:28.414 [2024-11-20 11:31:20.533535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
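Both targets in this job run with --interrupt-mode, so the reactors sleep on event fds instead of busy-polling, and the notices above show the app thread plus each nvmf poll-group thread being switched over. One way to confirm the mode on a live target is the framework_get_reactors RPC; the output shape sketched in the comment is how recent SPDK releases report it and is an assumption here, not something this log shows.

  # Query reactor state through the target's RPC socket (inside the netns).
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors
  # Expected (assumed) shape: each entry under "reactors" carries "in_interrupt": true.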
00:30:28.414 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.414 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:28.414 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.414 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.414 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:28.675 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.675 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:28.675 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:28.675 [2024-11-20 11:31:21.328968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.676 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:28.937 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.198 [2024-11-20 11:31:21.713675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.198 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:29.198 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:29.459 Malloc0 00:30:29.459 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:29.720 Delay0 00:30:29.720 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.980 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:29.980 NULL1 00:30:29.980 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
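With cnode1 populated (the delay bdev plus a 1000 MiB null bdev), the stress phase that follows interleaves a 30-second randread perf run with continuous namespace churn. The loop below is a condensed reading of the traced commands: bdev_null_resize takes the new size in MiB (hence the 1001, 1002, ... progression), and -Q is what lets perf ride out the I/O errors that open up while namespace 1 is detached, an inference from its use here rather than a documented guarantee.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do    # churn for as long as perf lives
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size"   # grows the null bdev each pass
  done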
00:30:30.241 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2937021 00:30:30.241 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:30.241 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:30.241 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.501 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.762 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:30.762 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:30.762 true 00:30:30.762 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:30.762 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.023 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.284 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:31.284 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:31.545 true 00:30:31.545 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:31.545 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.805 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.805 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:31.805 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:32.066 true 00:30:32.066 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:32.066 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.327 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.588 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:32.588 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:32.588 true 00:30:32.588 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:32.588 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.848 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.110 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:33.110 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:33.370 true 00:30:33.370 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:33.370 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.371 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.631 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:33.631 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:33.891 true 00:30:33.891 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:33.891 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.152 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.152 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:34.152 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:34.413 true 00:30:34.413 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:34.413 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.674 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:34.674 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:34.674 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:34.933 true 00:30:34.933 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:34.933 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.193 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.454 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:35.454 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:35.454 true 00:30:35.454 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:35.454 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.715 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.978 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:35.978 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:35.978 true 00:30:35.978 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2937021 00:30:35.978 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.239 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.500 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:36.500 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:36.760 true 00:30:36.760 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:36.760 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.760 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.020 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:37.021 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:37.281 true 00:30:37.281 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:37.281 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.542 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:37.542 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:37.542 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:37.803 true 00:30:37.803 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:37.803 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.064 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.064 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:38.064 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:38.372 true 00:30:38.372 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:38.372 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.632 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.633 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:38.633 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:38.893 true 00:30:38.893 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:38.893 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.154 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.154 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:39.154 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:39.414 true 00:30:39.414 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:39.414 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.675 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.936 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:39.936 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:39.936 true 00:30:39.936 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:39.936 11:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.196 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.458 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:40.458 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:40.458 true 00:30:40.458 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:40.458 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:40.718 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:40.977 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:40.977 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:40.977 true 00:30:41.236 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:41.236 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.236 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.495 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:41.495 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:41.754 true 00:30:41.754 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:41.754 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.754 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.013 11:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:42.013 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:42.273 true 00:30:42.273 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:42.273 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.532 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.533 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:42.533 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:42.792 true 00:30:42.792 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:42.792 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.051 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.051 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:43.051 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:43.310 true 00:30:43.310 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:43.310 11:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.571 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.831 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:43.831 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:43.831 true 00:30:43.831 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:43.831 11:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.090 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.350 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:44.350 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:44.350 true 00:30:44.350 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:44.350 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.609 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.868 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:44.868 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:45.126 true 00:30:45.126 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:45.126 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.126 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.385 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:45.385 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:45.644 true 00:30:45.644 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:45.644 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.644 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.903 11:31:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:45.903 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:46.162 true 00:30:46.162 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:46.162 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.422 11:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.422 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:46.422 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:46.682 true 00:30:46.682 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:46.682 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.942 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:46.942 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:46.942 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:47.203 true 00:30:47.203 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:47.203 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.463 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:47.463 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:47.463 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:47.724 true 00:30:47.724 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:47.724 11:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:47.986 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.248 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:48.248 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:48.248 true 00:30:48.248 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:48.248 11:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.509 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:48.769 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:48.769 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:48.769 true 00:30:48.769 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:48.769 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.029 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.289 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:49.289 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:49.289 true 00:30:49.550 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:49.550 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.550 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.809 11:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:49.809 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:50.071 true 00:30:50.071 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:50.071 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.071 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.332 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:50.332 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:50.593 true 00:30:50.593 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:50.593 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.854 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:50.854 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:50.854 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:51.115 true 00:30:51.115 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:51.115 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.376 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:51.376 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:51.376 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:51.637 true 00:30:51.637 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:51.637 11:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.898 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.160 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:52.160 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:52.160 true 00:30:52.160 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:52.160 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.421 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.687 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:52.687 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:52.990 true 00:30:52.990 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:52.990 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.990 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.289 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:53.289 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:53.289 true 00:30:53.289 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:53.289 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.568 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.828 11:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:53.829 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:53.829 true 00:30:54.088 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:54.088 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.088 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.347 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:54.347 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:54.607 true 00:30:54.607 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:54.607 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.607 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.867 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:54.867 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:55.128 true 00:30:55.128 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:55.128 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.389 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.389 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:55.389 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:55.650 true 00:30:55.650 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:55.650 11:31:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.911 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.911 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:55.911 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:56.171 true 00:30:56.171 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:56.171 11:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.430 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.690 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:56.690 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:56.690 true 00:30:56.690 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:56.690 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.949 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.210 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:57.210 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:57.210 true 00:30:57.210 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:57.210 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.470 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.731 11:31:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:57.731 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:57.731 true 00:30:57.992 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:57.992 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.992 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.252 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:58.252 11:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:58.512 true 00:30:58.512 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:58.512 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.512 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.772 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:58.772 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:59.032 true 00:30:59.032 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:59.032 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.293 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.293 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:59.293 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:59.553 true 00:30:59.553 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:30:59.553 11:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.919 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.919 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:30:59.919 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:00.194 true 00:31:00.194 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:31:00.194 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.194 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.456 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:31:00.456 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:31:00.456 Initializing NVMe Controllers 00:31:00.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.456 Controller IO queue size 128, less than required. 00:31:00.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:00.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:00.456 Initialization complete. Launching workers. 
00:31:00.456 ======================================================== 00:31:00.456 Latency(us) 00:31:00.456 Device Information : IOPS MiB/s Average min max 00:31:00.456 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30174.71 14.73 4241.87 1130.07 10996.34 00:31:00.456 ======================================================== 00:31:00.456 Total : 30174.71 14.73 4241.87 1130.07 10996.34 00:31:00.456 00:31:00.717 true 00:31:00.717 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2937021 00:31:00.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2937021) - No such process 00:31:00.717 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2937021 00:31:00.717 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.717 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:00.978 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:00.978 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:00.978 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:00.978 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:00.978 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:01.239 null0 00:31:01.239 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.239 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.239 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:01.239 null1 00:31:01.239 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.239 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.239 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:01.500 null2 00:31:01.500 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.500 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.500 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:01.761 null3 00:31:01.761 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.761 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.761 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:01.761 null4 00:31:01.761 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:01.761 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:01.761 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:02.022 null5 00:31:02.022 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.022 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.022 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:02.284 null6 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:02.284 null7 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
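The traced lines @58-@64 above amount to two small loops: one null bdev per worker thread, then one backgrounded hotplug worker per bdev. A sketch follows; the for-loop form is an assumption, while the names, sizes, and the nsid/bdev pairing come straight from the trace:

    RPC=scripts/rpc.py                         # path shortened, as in the earlier sketch
    nthreads=8                                 # @58
    pids=()                                    # @58
    for ((i = 0; i < nthreads; i++)); do       # @59-@60: create null0..null7 (size 100, block size 4096)
        $RPC bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do       # @62-@64: launch workers, collecting their PIDs
        add_remove $((i + 1)) "null$i" &       # nsid i+1 paired with bdev null$i, per the trace
        pids+=($!)
    done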
00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.284 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.284 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
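Continuing that sketch: the interleaved sh@14-sh@18 records from here on come from eight backgrounded add_remove workers, and the sh@62-sh@66 records are the spawn/wait scaffolding around them. Pieced together from the xtrace (again a reconstruction under the same assumptions, not the verbatim script):

    # Worker: hot-plug namespace $nsid ten times, attaching and detaching $bdev.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # nsid 1..8 maps to null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"   # cf. the "wait 2943229 2943230 ..." record just below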
00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2943229 2943230 2943234 2943236 2943239 2943242 2943245 2943248 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.285 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:02.546 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:02.808 11:31:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:02.808 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.070 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.331 11:31:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.331 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.331 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.331 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.331 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.593 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:03.855 
11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:03.855 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:03.856 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.117 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.378 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.379 
11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.379 11:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.379 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.379 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.379 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.379 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.379 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.379 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.640 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:04.902 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.163 11:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.163 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.163 
11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.423 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.424 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.424 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.684 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.684 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.684 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.684 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.685 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:05.944 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:05.945 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:05.945 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:05.945 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:05.945 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:06.205 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:06.205 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:06.205 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:06.464 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:06.464 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2936647 ']'
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2936647
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2936647 ']'
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2936647
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:06.465 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2936647
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
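The namespace churn traced above comes from a tight loop in test/nvmf/target/ns_hotplug_stress.sh (the sh@16-18 xtrace tags). A minimal sketch of that loop, assuming a shuffled namespace order and the $rpc/$subnqn shorthands; only the rpc.py invocations, the null0-null7 backing bdevs, and the ten-iteration bound are taken from the trace:

    # Hedged sketch of the hotplug stress loop implied by the sh@16-18 tags above.
    # The shuf-based ordering is an illustrative assumption.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; i++ )); do                     # sh@16: loop counter
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do          # sh@17: hot-add nsid n backed by null(n-1)
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$subnqn" "null$((n - 1))"
        done
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do          # sh@18: hot-remove in a new random order
            "$rpc" nvmf_subsystem_remove_ns "$subnqn" "$n"
        done
    done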
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2936647'
killing process with pid 2936647
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2936647
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2936647
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:06.465 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:31:06.724 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:06.724 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:06.724 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:06.724 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:06.724 11:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:08.639
00:31:08.639 real 0m48.758s
00:31:08.639 user 3m1.661s
00:31:08.639 sys 0m22.143s
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:08.639 ************************************
00:31:08.639 END TEST nvmf_ns_hotplug_stress
00:31:08.639 ************************************
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:08.639 ************************************
00:31:08.639 START TEST nvmf_delete_subsystem
************************************ 00:31:08.639 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:08.901 * Looking for test storage... 00:31:08.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:08.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.901 --rc genhtml_branch_coverage=1 00:31:08.901 --rc genhtml_function_coverage=1 00:31:08.901 --rc genhtml_legend=1 00:31:08.901 --rc geninfo_all_blocks=1 00:31:08.901 --rc geninfo_unexecuted_blocks=1 00:31:08.901 00:31:08.901 ' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:08.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.901 --rc genhtml_branch_coverage=1 00:31:08.901 --rc genhtml_function_coverage=1 00:31:08.901 --rc genhtml_legend=1 00:31:08.901 --rc geninfo_all_blocks=1 00:31:08.901 --rc geninfo_unexecuted_blocks=1 00:31:08.901 00:31:08.901 ' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:08.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.901 --rc genhtml_branch_coverage=1 00:31:08.901 --rc genhtml_function_coverage=1 00:31:08.901 --rc genhtml_legend=1 00:31:08.901 --rc geninfo_all_blocks=1 00:31:08.901 --rc geninfo_unexecuted_blocks=1 00:31:08.901 00:31:08.901 ' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:08.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.901 --rc genhtml_branch_coverage=1 00:31:08.901 --rc genhtml_function_coverage=1 00:31:08.901 --rc 
genhtml_legend=1 00:31:08.901 --rc geninfo_all_blocks=1 00:31:08.901 --rc geninfo_unexecuted_blocks=1 00:31:08.901 00:31:08.901 ' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.901 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.902 11:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.902 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:17.042 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:17.043 11:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:17.043 11:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:17.043 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:17.043 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:17.043 11:32:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:31:17.043 Found net devices under 0000:4b:00.0: cvl_0_0
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:31:17.043 Found net devices under 0000:4b:00.1: cvl_0_1
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
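The "Found net devices under ..." lines above come from mapping each candidate PCI function to the interface the kernel created for it, via sysfs. A standalone sketch of that lookup; the hard-coded BDF list is illustrative, the real script iterates the pci_devs array built from the device-ID tables earlier in the trace:

    # Map a PCI function to its kernel netdev, mirroring nvmf/common.sh@411-428 above.
    # The two BDFs are taken from this log; on другой host they would differ.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done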
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:17.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:17.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms
00:31:17.043
00:31:17.043 --- 10.0.0.2 ping statistics ---
00:31:17.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:17.043 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:17.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:17.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms
00:31:17.043
00:31:17.043 --- 10.0.0.1 ping statistics ---
00:31:17.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:17.043 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:17.043 11:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2948339
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2948339
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2948339 ']'
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
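Condensed from the nvmf_tcp_init trace above: the first E810 port (cvl_0_0) moves into a fresh network namespace and becomes the target at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, and one ping in each direction verifies the path. The 4420 ACCEPT rule is tagged with an SPDK_NVMF comment precisely so the teardown seen after the previous test (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip it wholesale. All commands below are verbatim from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagged rule; teardown scrubs anything carrying the SPDK_NVMF comment.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator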
00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.043 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.043 [2024-11-20 11:32:09.105428] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:17.044 [2024-11-20 11:32:09.106546] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:31:17.044 [2024-11-20 11:32:09.106599] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.044 [2024-11-20 11:32:09.207281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.044 [2024-11-20 11:32:09.258968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.044 [2024-11-20 11:32:09.259016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.044 [2024-11-20 11:32:09.259025] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.044 [2024-11-20 11:32:09.259032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.044 [2024-11-20 11:32:09.259039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:17.044 [2024-11-20 11:32:09.260672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.044 [2024-11-20 11:32:09.260677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.044 [2024-11-20 11:32:09.337021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:17.044 [2024-11-20 11:32:09.337530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:17.044 [2024-11-20 11:32:09.337872] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
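nvmfappstart launches the target inside that namespace with --interrupt-mode and core mask 0x3; the notices above confirm two reactors (cores 0 and 1) and the app and poll-group threads running in interrupt mode. A hedged sketch of the launch-and-wait sequence; the rpc_get_methods polling loop stands in for the real waitforlisten helper in autotest_common.sh, which retries more carefully (binary path and flags are verbatim from the nvmf/common.sh@508 entry):

    # Start the target in interrupt mode inside the target namespace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the RPC socket answers.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done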
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:17.305 [2024-11-20 11:32:09.969708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.305 11:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:17.306 [2024-11-20 11:32:10.004393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:17.306 NULL1
00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
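With the target answering on the RPC socket, delete_subsystem.sh provisions it: a TCP transport (-u 8192 sets the in-capsule data size), subsystem cnode1 with serial SPDK00000000000001 and at most 10 namespaces (-m 10), a listener on 10.0.0.2:4420, and a 1000 MiB null bdev with 512-byte blocks. The same sequence as plain rpc.py calls, arguments verbatim from the trace (rpc_cmd is effectively a wrapper around rpc.py):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512   # 1000 MiB backing bdev, 512 B blocks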
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.306 Delay0 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.306 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2948679 00:31:17.567 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:17.567 11:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:17.568 [2024-11-20 11:32:10.126763] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
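Pulled together, the rpc_cmd calls traced above amount to the following sequence, shown here as direct scripts/rpc.py invocations (arguments copied from the log; this consolidated form is an illustration, not part of the run, and SPDK_DIR is the placeholder from the earlier sketch):

  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512            # backing bdev: name, size, block size as traced
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Load generator as launched by delete_subsystem.sh@26:
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

The 1,000,000 us delays on the Delay0 bdev are presumably what keep the 128 queued I/Os in flight long enough for the subsystem deletion below to race against them.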
00:31:19.484 11:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.484 11:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.484 11:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:19.484 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' retries elided ...]
00:31:19.484 [2024-11-20 11:32:12.208943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b680 is same with the state(6) to be set
00:31:19.484 [... further repeated completion and retry messages elided ...]
00:31:19.485 [2024-11-20 11:32:12.212335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4bfc000c40 is same with the state(6) to be set
00:31:19.485 [... further repeated completion messages elided ...]
00:31:20.869 [2024-11-20 11:32:13.184214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2c9a0 is same with the state(6) to be set
00:31:20.869 [... further repeated completion messages elided ...]
00:31:20.869 [2024-11-20 11:32:13.212852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b4a0 is same with the state(6) to be set
00:31:20.869 [... further repeated completion messages elided ...]
00:31:20.869 [2024-11-20 11:32:13.213459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2b860 is same with the state(6) to be set
00:31:20.869 [... further repeated completion messages elided ...]
00:31:20.869 [2024-11-20 11:32:13.215030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4bfc00d020 is same with the state(6) to be set
00:31:20.869 [... further repeated completion messages elided ...]
00:31:20.869 [2024-11-20 11:32:13.215142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4bfc00d7c0 is same with the state(6) to be set
00:31:20.869 Initializing NVMe Controllers
00:31:20.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:20.869 Controller IO queue size 128, less than required.
00:31:20.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:20.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:20.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:20.869 Initialization complete. Launching workers.
00:31:20.869 ========================================================
00:31:20.869 Latency(us)
00:31:20.869 Device Information : IOPS MiB/s Average min max
00:31:20.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.09 0.09 880337.98 396.42 1008237.27
00:31:20.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.15 0.08 905561.24 333.68 1011518.58
00:31:20.869 ========================================================
00:31:20.869 Total : 342.24 0.17 892509.67 333.68 1011518.58
00:31:20.869
00:31:20.869 [2024-11-20 11:32:13.215670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2c9a0 (9): Bad file descriptor
00:31:20.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:20.869 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.869 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:20.869 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2948679 00:31:20.869 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2948679 00:31:21.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2948679) - No such process 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2948679 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@652 -- # local es=0 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2948679 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2948679 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:21.131 [2024-11-20 11:32:13.750042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2949364 
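The delay/kill -0 polling that the trace enters next (delete_subsystem.sh@56-60 here, @34-38 earlier) follows the pattern sketched below; this is a paraphrase of the loop, not a verbatim excerpt of the script:

  delay=0
  # Wait, bounded, for the perf process to exit once its subsystem is deleted.
  while kill -0 "$perf_pid" 2> /dev/null; do
      (( delay++ > 20 )) && { echo 'perf did not exit in time'; exit 1; }
      sleep 0.5
  done
  # Once the pid is gone, `wait $perf_pid` returns non-zero; the NOT helper
  # traced above inverts that status so the expected failure counts as a pass.

The first loop allows 30 iterations and this second one 20, roughly tracking the -t 5 and -t 3 run lengths of the two perf invocations.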
00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:21.131 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:21.131 [2024-11-20 11:32:13.849177] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:31:21.703 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:21.703 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:21.703 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:22.274 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:22.274 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:22.274 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:22.850 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:22.850 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:22.850 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.111 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.111 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:23.111 11:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:23.683 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:23.683 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:23.683 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.253 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.253 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:24.253 11:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:24.513 Initializing NVMe Controllers 00:31:24.514 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.514 Controller IO queue size 128, less than required. 00:31:24.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:24.514 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:24.514 Initialization complete. Launching workers. 00:31:24.514 ======================================================== 00:31:24.514 Latency(us) 00:31:24.514 Device Information : IOPS MiB/s Average min max 00:31:24.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002100.18 1000157.27 1005917.81 00:31:24.514 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003978.82 1000206.83 1010602.31 00:31:24.514 ======================================================== 00:31:24.514 Total : 256.00 0.12 1003039.50 1000157.27 1010602.31 00:31:24.514 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949364 00:31:24.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2949364) - No such process 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2949364 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.775 rmmod nvme_tcp 00:31:24.775 rmmod nvme_fabrics 00:31:24.775 rmmod nvme_keyring 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2948339 ']' 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2948339 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # 
'[' -z 2948339 ']' 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2948339 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948339 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948339' 00:31:24.775 killing process with pid 2948339 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2948339 00:31:24.775 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2948339 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.037 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.951 00:31:26.951 real 0m18.276s 00:31:26.951 user 0m26.358s 00:31:26.951 sys 0m7.583s 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:26.951 ************************************ 
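The nvmftestfini teardown traced here reduces to a handful of host-side steps; a simplified sketch follows (the pid and interface name are the values from this run, and the harness wraps the module removal in a set +e retry loop over {1..20}):

  sync
  # Unload host-side NVMe/TCP modules; failures are tolerated if already gone.
  sudo modprobe -v -r nvme-tcp
  sudo modprobe -v -r nvme-fabrics
  sudo kill 2948339                 # nvmf target pid (process name reactor_0)
  sudo ip -4 addr flush cvl_0_1     # drop the test address from the second port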
00:31:26.951 END TEST nvmf_delete_subsystem 00:31:26.951 ************************************ 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:26.951 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.214 ************************************ 00:31:27.214 START TEST nvmf_host_management 00:31:27.214 ************************************ 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:27.214 * Looking for test storage... 00:31:27.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 
00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:27.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.214 --rc genhtml_branch_coverage=1 00:31:27.214 --rc genhtml_function_coverage=1 00:31:27.214 --rc genhtml_legend=1 00:31:27.214 --rc geninfo_all_blocks=1 00:31:27.214 --rc geninfo_unexecuted_blocks=1 00:31:27.214 00:31:27.214 ' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:27.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.214 --rc genhtml_branch_coverage=1 00:31:27.214 --rc genhtml_function_coverage=1 00:31:27.214 --rc genhtml_legend=1 00:31:27.214 --rc geninfo_all_blocks=1 00:31:27.214 --rc geninfo_unexecuted_blocks=1 00:31:27.214 00:31:27.214 ' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:27.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.214 --rc genhtml_branch_coverage=1 00:31:27.214 --rc genhtml_function_coverage=1 00:31:27.214 --rc genhtml_legend=1 00:31:27.214 --rc geninfo_all_blocks=1 00:31:27.214 --rc geninfo_unexecuted_blocks=1 00:31:27.214 00:31:27.214 ' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:27.214 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:31:27.214 --rc genhtml_branch_coverage=1 00:31:27.214 --rc genhtml_function_coverage=1 00:31:27.214 --rc genhtml_legend=1 00:31:27.214 --rc geninfo_all_blocks=1 00:31:27.214 --rc geninfo_unexecuted_blocks=1 00:31:27.214 00:31:27.214 ' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.214 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same golangci/protoc/go prefixes repeated several times, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:27.215 11:32:19
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:27.215 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:27.477 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.627 11:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.627 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.628 11:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:35.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:35.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.628 11:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:35.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:35.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
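The trace above is nvmf/common.sh discovering usable NICs for the phy run: it matches the two Intel E810 functions (vendor 0x8086, device 0x159b, bound to the ice driver) against its PCI device tables, then resolves each function to its kernel net device by globbing sysfs, yielding cvl_0_0 and cvl_0_1. A minimal sketch of that resolution step, assuming only the standard sysfs layout (the pci_bus_cache lookups in the trace come from an associative cache built elsewhere in the harness and are not reproduced here):

  # Resolve a PCI function (BDF copied from the log above) to its net device,
  # mirroring the glob-and-strip pattern in the trace.
  pci=0000:4b:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

With two ports found, the harness takes the first (cvl_0_0) as the target-side interface; the entries that follow assign cvl_0_1 to the initiator side and set up the 10.0.0.0/24 test addressing inside a dedicated network namespace.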
00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:35.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:31:35.628 00:31:35.628 --- 10.0.0.2 ping statistics --- 00:31:35.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.628 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:35.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:31:35.628 00:31:35.628 --- 10.0.0.1 ping statistics --- 00:31:35.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.628 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.628 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2954075 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2954075 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2954075 ']' 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:35.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.629 11:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.629 [2024-11-20 11:32:27.537932] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:35.629 [2024-11-20 11:32:27.539081] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:31:35.629 [2024-11-20 11:32:27.539135] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.629 [2024-11-20 11:32:27.641114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:35.629 [2024-11-20 11:32:27.695139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.629 [2024-11-20 11:32:27.695206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.629 [2024-11-20 11:32:27.695219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.629 [2024-11-20 11:32:27.695231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.629 [2024-11-20 11:32:27.695237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.629 [2024-11-20 11:32:27.697207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.629 [2024-11-20 11:32:27.697430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:35.629 [2024-11-20 11:32:27.697588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.629 [2024-11-20 11:32:27.697589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:35.629 [2024-11-20 11:32:27.774594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.629 [2024-11-20 11:32:27.775900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:35.629 [2024-11-20 11:32:27.776116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:35.629 [2024-11-20 11:32:27.776509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:35.629 [2024-11-20 11:32:27.776551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
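These startup notices confirm what the nvmfappstart flags requested: the target runs inside the cvl_0_0_ns_spdk namespace created above, -m 0x1E pins reactors to cores 1-4, -e 0xFFFF enables every tracepoint group, and --interrupt-mode switches the app thread and the four poll-group threads to event-driven operation instead of busy polling. A hedged sketch of the equivalent manual launch, using only paths and flags visible in the trace (the polling loop is a simple stand-in for the harness's waitforlisten helper):

  # Start nvmf_tgt in the target namespace, then wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  # Retry a harmless RPC until the app listens on /var/tmp/spdk.sock.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
      sleep 0.2
  done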
00:31:35.629 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:35.629 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:35.629 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:35.629 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.629 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.891 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.891 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.892 [2024-11-20 11:32:28.410474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.892 Malloc0 00:31:35.892 [2024-11-20 11:32:28.506790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2954406 00:31:35.892 11:32:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2954406 /var/tmp/bdevperf.sock 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2954406 ']' 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:35.892 { 00:31:35.892 "params": { 00:31:35.892 "name": "Nvme$subsystem", 00:31:35.892 "trtype": "$TEST_TRANSPORT", 00:31:35.892 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.892 "adrfam": "ipv4", 00:31:35.892 "trsvcid": "$NVMF_PORT", 00:31:35.892 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.892 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.892 "hdgst": ${hdgst:-false}, 00:31:35.892 "ddgst": ${ddgst:-false} 00:31:35.892 }, 00:31:35.892 "method": "bdev_nvme_attach_controller" 00:31:35.892 } 00:31:35.892 EOF 00:31:35.892 )") 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
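The heredoc traced above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller stanza per subsystem: the shell expands $subsystem and the NVMF_* variables, jq validates the result, and bdevperf receives it over a process-substitution descriptor (the --json /dev/fd/63 argument in the command line above). Condensed to its essentials, with the workload parameters copied from the trace:

  # Hand the generated attach-controller config to bdevperf via /dev/fd.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10

The rendered output printed next shows the substitution result: Nvme0 attaching over TCP to 10.0.0.2:4420 against nqn.2016-06.io.spdk:cnode0 with header and data digests disabled.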
00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:35.892 11:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:35.892 "params": { 00:31:35.892 "name": "Nvme0", 00:31:35.892 "trtype": "tcp", 00:31:35.892 "traddr": "10.0.0.2", 00:31:35.892 "adrfam": "ipv4", 00:31:35.892 "trsvcid": "4420", 00:31:35.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.892 "hdgst": false, 00:31:35.892 "ddgst": false 00:31:35.892 }, 00:31:35.892 "method": "bdev_nvme_attach_controller" 00:31:35.892 }' 00:31:35.892 [2024-11-20 11:32:28.616059] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:31:35.892 [2024-11-20 11:32:28.616131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954406 ] 00:31:36.153 [2024-11-20 11:32:28.710018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.153 [2024-11-20 11:32:28.763305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.414 Running I/O for 10 seconds... 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=888 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 888 -ge 100 ']' 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.988 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.988 [2024-11-20 11:32:29.523469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 
00:31:36.988 [2024-11-20 11:32:29.523642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.988 [2024-11-20 11:32:29.523712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.989 [2024-11-20 11:32:29.523719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15c82a0 is same with the state(6) to be set 00:31:36.989 [2024-11-20 11:32:29.523909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.523965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.523988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.523997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.989 [2024-11-20 11:32:29.524569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.989 [2024-11-20 11:32:29.524578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:36.990 [2024-11-20 11:32:29.524634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:36.990 [2024-11-20 11:32:29.524814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.524968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:36.990 [2024-11-20 11:32:29.524985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.524993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.525124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:36.990 [2024-11-20 11:32:29.525132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.990 [2024-11-20 11:32:29.526453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:36.990 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.990 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:36.990 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.990 task offset: 121856 on job bdev=Nvme0n1 fails 00:31:36.990 00:31:36.990 Latency(us) 00:31:36.990 [2024-11-20T10:32:29.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.990 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:36.990 Job: Nvme0n1 ended in about 0.60 seconds with error 00:31:36.990 Verification LBA range: start 0x0 length 0x400 00:31:36.990 Nvme0n1 : 0.60 1594.08 99.63 107.16 0.00 36714.04 3249.49 36263.25 00:31:36.990 [2024-11-20T10:32:29.732Z] =================================================================================================================== 00:31:36.990 [2024-11-20T10:32:29.732Z] Total : 1594.08 99.63 107.16 0.00 36714.04 3249.49 36263.25 00:31:36.990 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:36.990 [2024-11-20 11:32:29.528695] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:36.990 [2024-11-20 11:32:29.528735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16df000 (9): Bad file descriptor 00:31:36.990 [2024-11-20 11:32:29.530467] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:36.990 [2024-11-20 11:32:29.530568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:36.990 [2024-11-20 11:32:29.530612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:36.991 [2024-11-20 11:32:29.530633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:36.991 [2024-11-20 11:32:29.530642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:36.991 [2024-11-20 11:32:29.530651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.991 [2024-11-20 11:32:29.530659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16df000 00:31:36.991 [2024-11-20 11:32:29.530684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16df000 (9): Bad file descriptor 00:31:36.991 [2024-11-20 11:32:29.530699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:36.991 [2024-11-20 11:32:29.530708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:36.991 [2024-11-20 11:32:29.530719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:36.991 [2024-11-20 11:32:29.530730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
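This failure is the point of the test. Pulling the host NQN out of the subsystem's allow list while bdevperf is mid-run force-closes the queue pair, which is what produced the long run of ABORTED - SQ DELETION completions above and the partial-run latency table; the driver's automatic reconnect is then rejected at the fabrics CONNECT stage (sct 1, sc 132) because, per the ctrlr.c error, cnode0 no longer allows host0. The two RPCs driving the sequence, as issued in the trace:

  # Revoke, then restore, the initiator's access to the subsystem.
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Once add_host re-admits the initiator, the fresh bdevperf run below connects cleanly and completes its one-second verify pass.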
00:31:36.991 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.991 11:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2954406 00:31:37.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2954406) - No such process 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:37.936 { 00:31:37.936 "params": { 00:31:37.936 "name": "Nvme$subsystem", 00:31:37.936 "trtype": "$TEST_TRANSPORT", 00:31:37.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:37.936 "adrfam": "ipv4", 00:31:37.936 "trsvcid": "$NVMF_PORT", 00:31:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:37.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:37.936 "hdgst": ${hdgst:-false}, 00:31:37.936 "ddgst": ${ddgst:-false} 00:31:37.936 }, 00:31:37.936 "method": "bdev_nvme_attach_controller" 00:31:37.936 } 00:31:37.936 EOF 00:31:37.936 )") 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:37.936 11:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:37.936 "params": { 00:31:37.936 "name": "Nvme0", 00:31:37.936 "trtype": "tcp", 00:31:37.936 "traddr": "10.0.0.2", 00:31:37.936 "adrfam": "ipv4", 00:31:37.936 "trsvcid": "4420", 00:31:37.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:37.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:37.936 "hdgst": false, 00:31:37.936 "ddgst": false 00:31:37.936 }, 00:31:37.936 "method": "bdev_nvme_attach_controller" 00:31:37.936 }' 00:31:37.936 [2024-11-20 11:32:30.599908] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:31:37.936 [2024-11-20 11:32:30.599964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954760 ] 00:31:38.198 [2024-11-20 11:32:30.687933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.198 [2024-11-20 11:32:30.724127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.198 Running I/O for 1 seconds... 00:31:39.137 1728.00 IOPS, 108.00 MiB/s 00:31:39.137 Latency(us) 00:31:39.137 [2024-11-20T10:32:31.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.137 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:39.137 Verification LBA range: start 0x0 length 0x400 00:31:39.137 Nvme0n1 : 1.00 1786.00 111.63 0.00 0.00 35170.52 5761.71 34078.72 00:31:39.137 [2024-11-20T10:32:31.879Z] =================================================================================================================== 00:31:39.137 [2024-11-20T10:32:31.879Z] Total : 1786.00 111.63 0.00 0.00 35170.52 5761.71 34078.72 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:39.397 11:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:39.397 rmmod nvme_tcp 00:31:39.397 rmmod nvme_fabrics 00:31:39.397 rmmod nvme_keyring 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2954075 ']' 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2954075 00:31:39.397 11:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2954075 ']' 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2954075 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:39.397 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2954075 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2954075' 00:31:39.686 killing process with pid 2954075 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2954075 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2954075 00:31:39.686 [2024-11-20 11:32:32.232377] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.686 11:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.597 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:41.858 00:31:41.858 real 0m14.617s 00:31:41.858 user 
0m18.935s 00:31:41.858 sys 0m7.539s 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:41.858 ************************************ 00:31:41.858 END TEST nvmf_host_management 00:31:41.858 ************************************ 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:41.858 ************************************ 00:31:41.858 START TEST nvmf_lvol 00:31:41.858 ************************************ 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:41.858 * Looking for test storage... 00:31:41.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:31:41.858 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:42.120 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:42.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.121 --rc genhtml_branch_coverage=1 00:31:42.121 --rc genhtml_function_coverage=1 00:31:42.121 --rc genhtml_legend=1 00:31:42.121 --rc geninfo_all_blocks=1 00:31:42.121 --rc geninfo_unexecuted_blocks=1 00:31:42.121 00:31:42.121 ' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:42.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.121 --rc genhtml_branch_coverage=1 00:31:42.121 --rc genhtml_function_coverage=1 00:31:42.121 --rc genhtml_legend=1 00:31:42.121 --rc geninfo_all_blocks=1 00:31:42.121 --rc geninfo_unexecuted_blocks=1 00:31:42.121 00:31:42.121 ' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:42.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.121 --rc genhtml_branch_coverage=1 00:31:42.121 --rc genhtml_function_coverage=1 00:31:42.121 --rc genhtml_legend=1 00:31:42.121 --rc geninfo_all_blocks=1 00:31:42.121 --rc geninfo_unexecuted_blocks=1 00:31:42.121 00:31:42.121 ' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:42.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.121 --rc genhtml_branch_coverage=1 00:31:42.121 --rc genhtml_function_coverage=1 
00:31:42.121 --rc genhtml_legend=1 00:31:42.121 --rc geninfo_all_blocks=1 00:31:42.121 --rc geninfo_unexecuted_blocks=1 00:31:42.121 00:31:42.121 ' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.121 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.121 11:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.122 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:50.262 11:32:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:50.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:50.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:50.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.262 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:50.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.263 
11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:50.263 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:50.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:31:50.263 00:31:50.263 --- 10.0.0.2 ping statistics --- 00:31:50.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.263 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:50.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:31:50.263 00:31:50.263 --- 10.0.0.1 ping statistics --- 00:31:50.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.263 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2959198 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2959198 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2959198 ']' 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:50.263 11:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:50.263 [2024-11-20 11:32:42.208097] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:31:50.263 [2024-11-20 11:32:42.209221] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:31:50.263 [2024-11-20 11:32:42.209271] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.263 [2024-11-20 11:32:42.310036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:50.263 [2024-11-20 11:32:42.362345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.263 [2024-11-20 11:32:42.362399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.263 [2024-11-20 11:32:42.362408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:50.263 [2024-11-20 11:32:42.362415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:50.263 [2024-11-20 11:32:42.362422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:50.263 [2024-11-20 11:32:42.364508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.263 [2024-11-20 11:32:42.364667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.263 [2024-11-20 11:32:42.364667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.263 [2024-11-20 11:32:42.441022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:50.263 [2024-11-20 11:32:42.441896] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:50.263 [2024-11-20 11:32:42.442303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:50.263 [2024-11-20 11:32:42.442471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:50.525 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:50.525 [2024-11-20 11:32:43.225625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:50.785 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:50.785 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:50.785 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:51.046 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:51.046 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:51.307 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:51.568 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=30edbdcb-aaec-49fd-8ea9-1c31dd4a43d2 00:31:51.568 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 30edbdcb-aaec-49fd-8ea9-1c31dd4a43d2 lvol 20 00:31:51.568 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2549f439-0f61-4afd-aef0-23031c7e2ab8 00:31:51.568 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:51.828 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2549f439-0f61-4afd-aef0-23031c7e2ab8 00:31:52.088 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:52.088 [2024-11-20 11:32:44.817506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:52.348 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:52.348 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2959799 00:31:52.348 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:52.349 11:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:53.732 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2549f439-0f61-4afd-aef0-23031c7e2ab8 MY_SNAPSHOT 00:31:53.732 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0d3ba367-9ae2-4495-9003-55d0b2c82ca0 00:31:53.732 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2549f439-0f61-4afd-aef0-23031c7e2ab8 30 00:31:53.992 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0d3ba367-9ae2-4495-9003-55d0b2c82ca0 MY_CLONE 00:31:54.253 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b24c85fd-e2ea-493e-a3d2-6f592e9e9b6c 00:31:54.253 11:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b24c85fd-e2ea-493e-a3d2-6f592e9e9b6c 00:31:54.514 11:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2959799 00:32:04.513 Initializing NVMe Controllers 00:32:04.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:04.513 Controller IO queue size 128, less than required. 00:32:04.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:04.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:04.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:04.513 Initialization complete. Launching workers. 
00:32:04.513 ======================================================== 00:32:04.513 Latency(us) 00:32:04.513 Device Information : IOPS MiB/s Average min max 00:32:04.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15357.20 59.99 8337.57 4345.18 67571.70 00:32:04.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15275.90 59.67 8381.27 4054.45 81048.62 00:32:04.513 ======================================================== 00:32:04.513 Total : 30633.10 119.66 8359.36 4054.45 81048.62 00:32:04.513 00:32:04.513 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:04.513 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2549f439-0f61-4afd-aef0-23031c7e2ab8 00:32:04.513 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 30edbdcb-aaec-49fd-8ea9-1c31dd4a43d2 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.513 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.514 rmmod nvme_tcp 00:32:04.514 rmmod nvme_fabrics 00:32:04.514 rmmod nvme_keyring 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2959198 ']' 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2959198 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2959198 ']' 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2959198 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2959198 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2959198' 00:32:04.514 killing process with pid 2959198 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2959198 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2959198 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.514 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.901 00:32:05.901 real 0m24.058s 00:32:05.901 user 0m56.420s 00:32:05.901 sys 0m10.987s 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:05.901 ************************************ 00:32:05.901 END TEST nvmf_lvol 00:32:05.901 ************************************ 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.901 ************************************ 00:32:05.901 START TEST nvmf_lvs_grow 00:32:05.901 
************************************ 00:32:05.901 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:06.164 * Looking for test storage... 00:32:06.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.164 --rc genhtml_branch_coverage=1 00:32:06.164 --rc genhtml_function_coverage=1 00:32:06.164 --rc genhtml_legend=1 00:32:06.164 --rc geninfo_all_blocks=1 00:32:06.164 --rc geninfo_unexecuted_blocks=1 00:32:06.164 00:32:06.164 ' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.164 --rc genhtml_branch_coverage=1 00:32:06.164 --rc genhtml_function_coverage=1 00:32:06.164 --rc genhtml_legend=1 00:32:06.164 --rc geninfo_all_blocks=1 00:32:06.164 --rc geninfo_unexecuted_blocks=1 00:32:06.164 00:32:06.164 ' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.164 --rc genhtml_branch_coverage=1 00:32:06.164 --rc genhtml_function_coverage=1 00:32:06.164 --rc genhtml_legend=1 00:32:06.164 --rc geninfo_all_blocks=1 00:32:06.164 --rc geninfo_unexecuted_blocks=1 00:32:06.164 00:32:06.164 ' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:06.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.164 --rc genhtml_branch_coverage=1 00:32:06.164 --rc genhtml_function_coverage=1 00:32:06.164 --rc genhtml_legend=1 00:32:06.164 --rc geninfo_all_blocks=1 00:32:06.164 --rc geninfo_unexecuted_blocks=1 00:32:06.164 00:32:06.164 ' 00:32:06.164 11:32:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.164 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
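Up to this point the sourced nvmf/common.sh has produced two reusable pieces of state: a host identity for later nvme connect calls and the argument array for the target app. A minimal bash sketch of that setup, using the same variable names as the trace (the suffix-strip used to derive the host ID is an assumption about how common.sh computes it, not a quote from the script):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: host ID is the NQN's trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVMF_APP=(nvmf_tgt -i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory ID + full trace mask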
00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.165 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:14.308 11:33:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
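Because SPDK_TEST_NVMF_NICS=e810, only the two Intel E810 device IDs (0x1592, 0x159b) survive the classification above; the x722 and mlx tables are filled in but discarded. The sysfs walk the next lines perform can be replayed stand-alone; this sketch hard-codes the two PCI functions the trace is about to find (on another machine the addresses would differ):

    pci_devs=(0000:4b:00.0 0000:4b:00.1)        # the E810-XXV (0x8086:0x159b) ports below
    for pci in "${pci_devs[@]}"; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
        done
    done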
00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:14.308 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:14.308 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:14.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:14.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:14.308 11:33:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:14.308 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:14.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:14.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:32:14.308 00:32:14.308 --- 10.0.0.2 ping statistics --- 00:32:14.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.308 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:14.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:14.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:32:14.308 00:32:14.308 --- 10.0.0.1 ping statistics --- 00:32:14.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:14.308 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2966238 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2966238 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2966238 ']' 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.308 11:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.308 [2024-11-20 11:33:06.292471] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
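The block from ip netns add down to the two pings is the standard two-sided rig for a physical NIC pair: one E810 port becomes the target interface inside a namespace, the other stays in the host as the initiator, so the NVMe/TCP traffic genuinely crosses the wire. Condensed (run as root, assuming the renamed interfaces cvl_0_0/cvl_0_1 exist; the test additionally tags its iptables rule with an SPDK_NVMF comment so teardown can find it):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # host -> namespaced target

Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk, including the nvmf_tgt launch itself (-i 0 -e 0xFFFF --interrupt-mode -m 0x1); that wrapper is what the NVMF_TARGET_NS_CMD prefix in the trace expands to.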
00:32:14.308 [2024-11-20 11:33:06.293913] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:32:14.308 [2024-11-20 11:33:06.293979] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.308 [2024-11-20 11:33:06.392698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.308 [2024-11-20 11:33:06.444145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.308 [2024-11-20 11:33:06.444209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.308 [2024-11-20 11:33:06.444218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.308 [2024-11-20 11:33:06.444225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.308 [2024-11-20 11:33:06.444231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.308 [2024-11-20 11:33:06.445002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.308 [2024-11-20 11:33:06.521026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:14.308 [2024-11-20 11:33:06.521327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.569 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:14.829 [2024-11-20 11:33:07.313868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:14.829 ************************************ 00:32:14.829 START TEST lvs_grow_clean 00:32:14.829 ************************************ 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:14.829 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:15.106 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:15.106 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:15.106 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:15.106 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:15.106 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:15.367 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:15.367 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:15.367 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5aaf0c09-89a4-4e85-82a9-506031166c5c lvol 150 00:32:15.627 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e50fe35a-868f-44fe-a069-5a77304d346a 00:32:15.627 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:15.627 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:15.887 [2024-11-20 11:33:08.373556] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:15.887 [2024-11-20 11:33:08.373724] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:15.887 true 00:32:15.887 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:15.887 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:15.887 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:15.887 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:16.146 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e50fe35a-868f-44fe-a069-5a77304d346a 00:32:16.407 11:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:16.407 [2024-11-20 11:33:09.102250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.407 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2966861 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2966861 /var/tmp/bdevperf.sock 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2966861 ']' 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:16.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.668 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:16.668 [2024-11-20 11:33:09.323686] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:32:16.668 [2024-11-20 11:33:09.323754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966861 ] 00:32:16.927 [2024-11-20 11:33:09.417440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.927 [2024-11-20 11:33:09.472470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.498 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.498 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:17.498 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:18.070 Nvme0n1 00:32:18.070 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:18.070 [ 00:32:18.070 { 00:32:18.070 "name": "Nvme0n1", 00:32:18.070 "aliases": [ 00:32:18.070 "e50fe35a-868f-44fe-a069-5a77304d346a" 00:32:18.070 ], 00:32:18.070 "product_name": "NVMe disk", 00:32:18.070 "block_size": 4096, 00:32:18.070 "num_blocks": 38912, 00:32:18.070 "uuid": "e50fe35a-868f-44fe-a069-5a77304d346a", 00:32:18.070 "numa_id": 0, 00:32:18.070 "assigned_rate_limits": { 00:32:18.070 "rw_ios_per_sec": 0, 00:32:18.070 "rw_mbytes_per_sec": 0, 00:32:18.070 "r_mbytes_per_sec": 0, 00:32:18.070 "w_mbytes_per_sec": 0 00:32:18.070 }, 00:32:18.070 "claimed": false, 00:32:18.070 "zoned": false, 00:32:18.070 "supported_io_types": { 00:32:18.070 "read": true, 00:32:18.070 "write": true, 00:32:18.070 "unmap": true, 00:32:18.070 "flush": true, 00:32:18.070 "reset": true, 00:32:18.070 "nvme_admin": true, 00:32:18.070 "nvme_io": true, 00:32:18.070 "nvme_io_md": false, 00:32:18.070 "write_zeroes": true, 00:32:18.070 "zcopy": false, 00:32:18.070 "get_zone_info": false, 00:32:18.070 "zone_management": false, 00:32:18.070 "zone_append": false, 00:32:18.070 "compare": true, 00:32:18.070 "compare_and_write": true, 00:32:18.070 "abort": true, 00:32:18.070 "seek_hole": false, 00:32:18.070 "seek_data": false, 00:32:18.070 "copy": true, 
00:32:18.070 "nvme_iov_md": false 00:32:18.070 }, 00:32:18.070 "memory_domains": [ 00:32:18.070 { 00:32:18.070 "dma_device_id": "system", 00:32:18.070 "dma_device_type": 1 00:32:18.070 } 00:32:18.070 ], 00:32:18.070 "driver_specific": { 00:32:18.070 "nvme": [ 00:32:18.070 { 00:32:18.070 "trid": { 00:32:18.070 "trtype": "TCP", 00:32:18.070 "adrfam": "IPv4", 00:32:18.070 "traddr": "10.0.0.2", 00:32:18.070 "trsvcid": "4420", 00:32:18.070 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:18.070 }, 00:32:18.070 "ctrlr_data": { 00:32:18.070 "cntlid": 1, 00:32:18.070 "vendor_id": "0x8086", 00:32:18.070 "model_number": "SPDK bdev Controller", 00:32:18.070 "serial_number": "SPDK0", 00:32:18.070 "firmware_revision": "25.01", 00:32:18.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:18.070 "oacs": { 00:32:18.070 "security": 0, 00:32:18.070 "format": 0, 00:32:18.070 "firmware": 0, 00:32:18.070 "ns_manage": 0 00:32:18.070 }, 00:32:18.070 "multi_ctrlr": true, 00:32:18.070 "ana_reporting": false 00:32:18.070 }, 00:32:18.070 "vs": { 00:32:18.070 "nvme_version": "1.3" 00:32:18.070 }, 00:32:18.070 "ns_data": { 00:32:18.070 "id": 1, 00:32:18.070 "can_share": true 00:32:18.071 } 00:32:18.071 } 00:32:18.071 ], 00:32:18.071 "mp_policy": "active_passive" 00:32:18.071 } 00:32:18.071 } 00:32:18.071 ] 00:32:18.071 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:18.071 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2967356 00:32:18.071 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:18.332 Running I/O for 10 seconds... 
00:32:19.275 Latency(us) 00:32:19.275 [2024-11-20T10:33:12.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.275 Nvme0n1 : 1.00 16447.00 64.25 0.00 0.00 0.00 0.00 0.00 00:32:19.275 [2024-11-20T10:33:12.017Z] =================================================================================================================== 00:32:19.275 [2024-11-20T10:33:12.017Z] Total : 16447.00 64.25 0.00 0.00 0.00 0.00 0.00 00:32:19.275 00:32:20.217 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:20.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:20.217 Nvme0n1 : 2.00 16732.50 65.36 0.00 0.00 0.00 0.00 0.00 00:32:20.217 [2024-11-20T10:33:12.959Z] =================================================================================================================== 00:32:20.217 [2024-11-20T10:33:12.959Z] Total : 16732.50 65.36 0.00 0.00 0.00 0.00 0.00 00:32:20.217 00:32:20.217 true 00:32:20.217 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:20.217 11:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:20.478 11:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:20.478 11:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:20.478 11:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2967356 00:32:21.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:21.418 Nvme0n1 : 3.00 17018.33 66.48 0.00 0.00 0.00 0.00 0.00 00:32:21.418 [2024-11-20T10:33:14.160Z] =================================================================================================================== 00:32:21.418 [2024-11-20T10:33:14.160Z] Total : 17018.33 66.48 0.00 0.00 0.00 0.00 0.00 00:32:21.418 00:32:22.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:22.360 Nvme0n1 : 4.00 17446.75 68.15 0.00 0.00 0.00 0.00 0.00 00:32:22.360 [2024-11-20T10:33:15.102Z] =================================================================================================================== 00:32:22.360 [2024-11-20T10:33:15.102Z] Total : 17446.75 68.15 0.00 0.00 0.00 0.00 0.00 00:32:22.360 00:32:23.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:23.303 Nvme0n1 : 5.00 18961.20 74.07 0.00 0.00 0.00 0.00 0.00 00:32:23.303 [2024-11-20T10:33:16.045Z] =================================================================================================================== 00:32:23.303 [2024-11-20T10:33:16.045Z] Total : 18961.20 74.07 0.00 0.00 0.00 0.00 0.00 00:32:23.303 00:32:24.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.247 Nvme0n1 : 6.00 19981.50 78.05 0.00 0.00 0.00 0.00 0.00 00:32:24.247 [2024-11-20T10:33:16.989Z] 
=================================================================================================================== 00:32:24.247 [2024-11-20T10:33:16.989Z] Total : 19981.50 78.05 0.00 0.00 0.00 0.00 0.00 00:32:24.247 00:32:25.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:25.188 Nvme0n1 : 7.00 20615.86 80.53 0.00 0.00 0.00 0.00 0.00 00:32:25.188 [2024-11-20T10:33:17.930Z] =================================================================================================================== 00:32:25.188 [2024-11-20T10:33:17.931Z] Total : 20615.86 80.53 0.00 0.00 0.00 0.00 0.00 00:32:25.189 00:32:26.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:26.129 Nvme0n1 : 8.00 21084.88 82.36 0.00 0.00 0.00 0.00 0.00 00:32:26.129 [2024-11-20T10:33:18.871Z] =================================================================================================================== 00:32:26.129 [2024-11-20T10:33:18.871Z] Total : 21084.88 82.36 0.00 0.00 0.00 0.00 0.00 00:32:26.129 00:32:27.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:27.511 Nvme0n1 : 9.00 21453.22 83.80 0.00 0.00 0.00 0.00 0.00 00:32:27.511 [2024-11-20T10:33:20.253Z] =================================================================================================================== 00:32:27.511 [2024-11-20T10:33:20.253Z] Total : 21453.22 83.80 0.00 0.00 0.00 0.00 0.00 00:32:27.511 00:32:28.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.451 Nvme0n1 : 10.00 21743.10 84.93 0.00 0.00 0.00 0.00 0.00 00:32:28.451 [2024-11-20T10:33:21.193Z] =================================================================================================================== 00:32:28.451 [2024-11-20T10:33:21.193Z] Total : 21743.10 84.93 0.00 0.00 0.00 0.00 0.00 00:32:28.451 00:32:28.451 00:32:28.451 Latency(us) 00:32:28.451 [2024-11-20T10:33:21.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:28.451 Nvme0n1 : 10.00 21744.23 84.94 0.00 0.00 5882.67 1788.59 32331.09 00:32:28.451 [2024-11-20T10:33:21.193Z] =================================================================================================================== 00:32:28.451 [2024-11-20T10:33:21.193Z] Total : 21744.23 84.94 0.00 0.00 5882.67 1788.59 32331.09 00:32:28.451 { 00:32:28.452 "results": [ 00:32:28.452 { 00:32:28.452 "job": "Nvme0n1", 00:32:28.452 "core_mask": "0x2", 00:32:28.452 "workload": "randwrite", 00:32:28.452 "status": "finished", 00:32:28.452 "queue_depth": 128, 00:32:28.452 "io_size": 4096, 00:32:28.452 "runtime": 10.00463, 00:32:28.452 "iops": 21744.23242038936, 00:32:28.452 "mibps": 84.93840789214593, 00:32:28.452 "io_failed": 0, 00:32:28.452 "io_timeout": 0, 00:32:28.452 "avg_latency_us": 5882.667669870631, 00:32:28.452 "min_latency_us": 1788.5866666666666, 00:32:28.452 "max_latency_us": 32331.093333333334 00:32:28.452 } 00:32:28.452 ], 00:32:28.452 "core_count": 1 00:32:28.452 } 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2966861 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2966861 ']' 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2966861 
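The 49 -> 99 cluster transition checked during the run is plain geometry: the lvstore was created with 4 MiB clusters (--cluster-sz 4194304) on a 200M file, the file was truncated to 400M and rescanned mid-run (block count 51200 -> 102400 at 4 KiB blocks), and bdev_lvol_grow_lvstore claimed the new space. Roughly one cluster is withheld for metadata; treating it as exactly one is an approximation, since the real reservation follows --md-pages-per-cluster-ratio 300:

    echo $(( 200 / 4 - 1 ))   # 49 data clusters before the grow
    echo $(( 400 / 4 - 1 ))   # 99 after truncate -s 400M + bdev_aio_rescan + grow_lvstore
    echo $(( 99 - 38 ))       # 61 free clusters once the 150M lvol (38 clusters) is counted

The 61 shows up in the free_clusters check right after the run, and the IOPS column climbing from roughly 16.4k to 21.7k across the same window is the evidence that the resize completed without stalling the in-flight workload.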
00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2966861 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2966861' 00:32:28.452 killing process with pid 2966861 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2966861 00:32:28.452 Received shutdown signal, test time was about 10.000000 seconds 00:32:28.452 00:32:28.452 Latency(us) 00:32:28.452 [2024-11-20T10:33:21.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.452 [2024-11-20T10:33:21.194Z] =================================================================================================================== 00:32:28.452 [2024-11-20T10:33:21.194Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.452 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2966861 00:32:28.452 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:28.712 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:28.973 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:28.973 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:28.973 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:28.973 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:28.973 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:29.234 [2024-11-20 11:33:21.785624] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 
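What follows is the recovery half of the clean-grow test: pull the base bdev out from under the lvstore, prove the lvstore is gone, then recreate the file-backed bdev and let examine restore the lvol from its on-disk metadata. In outline (rpc.py short for scripts/rpc.py; $aio_file is the same backing file as before):

    rpc.py bdev_aio_delete aio_bdev                  # hot-remove: lvstore lvs closes with it
    rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c   # expected to fail
    rpc.py bdev_aio_create $aio_file aio_bdev 4096   # same file, same 4k block size
    rpc.py bdev_wait_for_examine
    rpc.py bdev_get_bdevs -b e50fe35a-868f-44fe-a069-5a77304d346a -t 2000   # lvol is back

The NOT wrapper around the get_lvstores call inverts its exit status, so the -19 No such device JSON-RPC error printed next is the assertion passing, not a failure.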
00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:29.234 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:29.495 request: 00:32:29.495 { 00:32:29.495 "uuid": "5aaf0c09-89a4-4e85-82a9-506031166c5c", 00:32:29.495 "method": "bdev_lvol_get_lvstores", 00:32:29.495 "req_id": 1 00:32:29.495 } 00:32:29.495 Got JSON-RPC error response 00:32:29.495 response: 00:32:29.495 { 00:32:29.495 "code": -19, 00:32:29.495 "message": "No such device" 00:32:29.495 } 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:29.495 aio_bdev 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
e50fe35a-868f-44fe-a069-5a77304d346a 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e50fe35a-868f-44fe-a069-5a77304d346a 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:29.495 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:29.756 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e50fe35a-868f-44fe-a069-5a77304d346a -t 2000 00:32:30.018 [ 00:32:30.018 { 00:32:30.018 "name": "e50fe35a-868f-44fe-a069-5a77304d346a", 00:32:30.018 "aliases": [ 00:32:30.018 "lvs/lvol" 00:32:30.018 ], 00:32:30.018 "product_name": "Logical Volume", 00:32:30.018 "block_size": 4096, 00:32:30.018 "num_blocks": 38912, 00:32:30.018 "uuid": "e50fe35a-868f-44fe-a069-5a77304d346a", 00:32:30.018 "assigned_rate_limits": { 00:32:30.018 "rw_ios_per_sec": 0, 00:32:30.018 "rw_mbytes_per_sec": 0, 00:32:30.018 "r_mbytes_per_sec": 0, 00:32:30.018 "w_mbytes_per_sec": 0 00:32:30.018 }, 00:32:30.018 "claimed": false, 00:32:30.018 "zoned": false, 00:32:30.018 "supported_io_types": { 00:32:30.018 "read": true, 00:32:30.018 "write": true, 00:32:30.018 "unmap": true, 00:32:30.018 "flush": false, 00:32:30.018 "reset": true, 00:32:30.018 "nvme_admin": false, 00:32:30.018 "nvme_io": false, 00:32:30.018 "nvme_io_md": false, 00:32:30.018 "write_zeroes": true, 00:32:30.018 "zcopy": false, 00:32:30.018 "get_zone_info": false, 00:32:30.018 "zone_management": false, 00:32:30.018 "zone_append": false, 00:32:30.018 "compare": false, 00:32:30.018 "compare_and_write": false, 00:32:30.018 "abort": false, 00:32:30.018 "seek_hole": true, 00:32:30.018 "seek_data": true, 00:32:30.018 "copy": false, 00:32:30.018 "nvme_iov_md": false 00:32:30.018 }, 00:32:30.018 "driver_specific": { 00:32:30.018 "lvol": { 00:32:30.018 "lvol_store_uuid": "5aaf0c09-89a4-4e85-82a9-506031166c5c", 00:32:30.018 "base_bdev": "aio_bdev", 00:32:30.018 "thin_provision": false, 00:32:30.018 "num_allocated_clusters": 38, 00:32:30.018 "snapshot": false, 00:32:30.018 "clone": false, 00:32:30.018 "esnap_clone": false 00:32:30.018 } 00:32:30.018 } 00:32:30.018 } 00:32:30.018 ] 00:32:30.018 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:30.018 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:30.018 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:30.279 11:33:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:30.279 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:30.279 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:30.279 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:30.279 11:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e50fe35a-868f-44fe-a069-5a77304d346a 00:32:30.540 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5aaf0c09-89a4-4e85-82a9-506031166c5c 00:32:30.801 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:30.801 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.062 00:32:31.062 real 0m16.157s 00:32:31.062 user 0m15.706s 00:32:31.062 sys 0m1.531s 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:31.062 ************************************ 00:32:31.062 END TEST lvs_grow_clean 00:32:31.062 ************************************ 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:31.062 ************************************ 00:32:31.062 START TEST lvs_grow_dirty 00:32:31.062 ************************************ 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.062 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:31.323 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:31.323 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:31.584 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd10b138-4813-4fb4-8a05-c0c840673336 00:32:31.584 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:31.584 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:31.584 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:31.584 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:31.584 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd10b138-4813-4fb4-8a05-c0c840673336 lvol 150 00:32:31.845 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:31.845 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:31.845 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:32.106 [2024-11-20 11:33:24.621556] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:32.106 [2024-11-20 11:33:24.621725] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:32.106 true 00:32:32.106 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:32.106 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:32.106 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:32.106 11:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:32.366 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:32.626 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:32.626 [2024-11-20 11:33:25.334151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.626 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2970404 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2970404 /var/tmp/bdevperf.sock 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2970404 ']' 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:32.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
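The traced commands around this point drive a standalone bdevperf against the target's TCP listener: bdevperf is launched idle (-z) on core 1 and the script waits for its RPC socket before attaching a controller and starting the 10-second randwrite run (both calls appear verbatim in the trace that follows). A minimal sketch of that sequence, using $SPDK_ROOT as shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree — the shorthand is a readability assumption, not part of the log:

    SOCK=/var/tmp/bdevperf.sock
    # Launch bdevperf idle (-z): 4 KiB randwrite, queue depth 128, 10 s, core mask 0x2.
    "$SPDK_ROOT/build/examples/bdevperf" -r "$SOCK" -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    # Attach the target's listener as bdev Nvme0n1, then kick off the workload.
    "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests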
00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.887 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:32.887 [2024-11-20 11:33:25.575344] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:32:32.887 [2024-11-20 11:33:25.575417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970404 ] 00:32:33.148 [2024-11-20 11:33:25.668283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.148 [2024-11-20 11:33:25.721142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.719 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.719 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:33.719 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:34.290 Nvme0n1 00:32:34.290 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:34.290 [ 00:32:34.290 { 00:32:34.290 "name": "Nvme0n1", 00:32:34.290 "aliases": [ 00:32:34.290 "b10b9e41-a12e-4416-bd28-2f21db13d464" 00:32:34.290 ], 00:32:34.290 "product_name": "NVMe disk", 00:32:34.290 "block_size": 4096, 00:32:34.290 "num_blocks": 38912, 00:32:34.290 "uuid": "b10b9e41-a12e-4416-bd28-2f21db13d464", 00:32:34.290 "numa_id": 0, 00:32:34.290 "assigned_rate_limits": { 00:32:34.290 "rw_ios_per_sec": 0, 00:32:34.290 "rw_mbytes_per_sec": 0, 00:32:34.290 "r_mbytes_per_sec": 0, 00:32:34.290 "w_mbytes_per_sec": 0 00:32:34.290 }, 00:32:34.290 "claimed": false, 00:32:34.290 "zoned": false, 00:32:34.290 "supported_io_types": { 00:32:34.290 "read": true, 00:32:34.290 "write": true, 00:32:34.290 "unmap": true, 00:32:34.290 "flush": true, 00:32:34.290 "reset": true, 00:32:34.290 "nvme_admin": true, 00:32:34.290 "nvme_io": true, 00:32:34.290 "nvme_io_md": false, 00:32:34.290 "write_zeroes": true, 00:32:34.290 "zcopy": false, 00:32:34.290 "get_zone_info": false, 00:32:34.290 "zone_management": false, 00:32:34.290 "zone_append": false, 00:32:34.290 "compare": true, 00:32:34.290 "compare_and_write": true, 00:32:34.290 "abort": true, 00:32:34.290 "seek_hole": false, 00:32:34.290 "seek_data": false, 00:32:34.290 "copy": true, 00:32:34.290 "nvme_iov_md": false 00:32:34.290 }, 00:32:34.290 "memory_domains": [ 00:32:34.290 { 00:32:34.290 "dma_device_id": "system", 00:32:34.290 "dma_device_type": 1 00:32:34.290 } 00:32:34.290 ], 00:32:34.290 "driver_specific": { 00:32:34.290 "nvme": [ 00:32:34.290 { 00:32:34.290 "trid": { 00:32:34.290 "trtype": "TCP", 00:32:34.290 "adrfam": "IPv4", 00:32:34.290 "traddr": "10.0.0.2", 00:32:34.290 "trsvcid": "4420", 00:32:34.290 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:34.290 }, 00:32:34.290 "ctrlr_data": 
{ 00:32:34.290 "cntlid": 1, 00:32:34.290 "vendor_id": "0x8086", 00:32:34.290 "model_number": "SPDK bdev Controller", 00:32:34.290 "serial_number": "SPDK0", 00:32:34.290 "firmware_revision": "25.01", 00:32:34.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.290 "oacs": { 00:32:34.290 "security": 0, 00:32:34.290 "format": 0, 00:32:34.290 "firmware": 0, 00:32:34.290 "ns_manage": 0 00:32:34.290 }, 00:32:34.290 "multi_ctrlr": true, 00:32:34.290 "ana_reporting": false 00:32:34.290 }, 00:32:34.290 "vs": { 00:32:34.290 "nvme_version": "1.3" 00:32:34.290 }, 00:32:34.290 "ns_data": { 00:32:34.290 "id": 1, 00:32:34.290 "can_share": true 00:32:34.290 } 00:32:34.290 } 00:32:34.290 ], 00:32:34.290 "mp_policy": "active_passive" 00:32:34.290 } 00:32:34.290 } 00:32:34.290 ] 00:32:34.290 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2970588 00:32:34.290 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:34.290 11:33:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:34.551 Running I/O for 10 seconds... 00:32:35.616 Latency(us) 00:32:35.616 [2024-11-20T10:33:28.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.616 Nvme0n1 : 1.00 16808.00 65.66 0.00 0.00 0.00 0.00 0.00 00:32:35.616 [2024-11-20T10:33:28.358Z] =================================================================================================================== 00:32:35.616 [2024-11-20T10:33:28.358Z] Total : 16808.00 65.66 0.00 0.00 0.00 0.00 0.00 00:32:35.616 00:32:36.288 11:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:36.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.549 Nvme0n1 : 2.00 17057.00 66.63 0.00 0.00 0.00 0.00 0.00 00:32:36.549 [2024-11-20T10:33:29.291Z] =================================================================================================================== 00:32:36.549 [2024-11-20T10:33:29.291Z] Total : 17057.00 66.63 0.00 0.00 0.00 0.00 0.00 00:32:36.549 00:32:36.549 true 00:32:36.549 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:36.549 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:36.809 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:36.809 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:36.809 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2970588 00:32:37.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.381 Nvme0n1 : 
3.00 17171.00 67.07 0.00 0.00 0.00 0.00 0.00 00:32:37.381 [2024-11-20T10:33:30.123Z] =================================================================================================================== 00:32:37.381 [2024-11-20T10:33:30.123Z] Total : 17171.00 67.07 0.00 0.00 0.00 0.00 0.00 00:32:37.381 00:32:38.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.765 Nvme0n1 : 4.00 17323.25 67.67 0.00 0.00 0.00 0.00 0.00 00:32:38.765 [2024-11-20T10:33:31.507Z] =================================================================================================================== 00:32:38.765 [2024-11-20T10:33:31.507Z] Total : 17323.25 67.67 0.00 0.00 0.00 0.00 0.00 00:32:38.765 00:32:39.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:39.335 Nvme0n1 : 5.00 18379.80 71.80 0.00 0.00 0.00 0.00 0.00 00:32:39.335 [2024-11-20T10:33:32.077Z] =================================================================================================================== 00:32:39.335 [2024-11-20T10:33:32.077Z] Total : 18379.80 71.80 0.00 0.00 0.00 0.00 0.00 00:32:39.335 00:32:40.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:40.720 Nvme0n1 : 6.00 19571.00 76.45 0.00 0.00 0.00 0.00 0.00 00:32:40.720 [2024-11-20T10:33:33.462Z] =================================================================================================================== 00:32:40.720 [2024-11-20T10:33:33.462Z] Total : 19571.00 76.45 0.00 0.00 0.00 0.00 0.00 00:32:40.720 00:32:41.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.661 Nvme0n1 : 7.00 20421.86 79.77 0.00 0.00 0.00 0.00 0.00 00:32:41.661 [2024-11-20T10:33:34.403Z] =================================================================================================================== 00:32:41.661 [2024-11-20T10:33:34.403Z] Total : 20421.86 79.77 0.00 0.00 0.00 0.00 0.00 00:32:41.661 00:32:42.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.600 Nvme0n1 : 8.00 21060.00 82.27 0.00 0.00 0.00 0.00 0.00 00:32:42.600 [2024-11-20T10:33:35.342Z] =================================================================================================================== 00:32:42.600 [2024-11-20T10:33:35.342Z] Total : 21060.00 82.27 0.00 0.00 0.00 0.00 0.00 00:32:42.600 00:32:43.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.541 Nvme0n1 : 9.00 21556.33 84.20 0.00 0.00 0.00 0.00 0.00 00:32:43.541 [2024-11-20T10:33:36.283Z] =================================================================================================================== 00:32:43.541 [2024-11-20T10:33:36.283Z] Total : 21556.33 84.20 0.00 0.00 0.00 0.00 0.00 00:32:43.541 00:32:44.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.481 Nvme0n1 : 10.00 21953.40 85.76 0.00 0.00 0.00 0.00 0.00 00:32:44.481 [2024-11-20T10:33:37.223Z] =================================================================================================================== 00:32:44.481 [2024-11-20T10:33:37.223Z] Total : 21953.40 85.76 0.00 0.00 0.00 0.00 0.00 00:32:44.481 00:32:44.481 00:32:44.481 Latency(us) 00:32:44.481 [2024-11-20T10:33:37.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.481 Nvme0n1 : 10.00 21955.99 85.77 0.00 0.00 5826.76 3713.71 28398.93 00:32:44.481 
[2024-11-20T10:33:37.223Z] =================================================================================================================== 00:32:44.481 [2024-11-20T10:33:37.223Z] Total : 21955.99 85.77 0.00 0.00 5826.76 3713.71 28398.93 00:32:44.481 { 00:32:44.481 "results": [ 00:32:44.481 { 00:32:44.481 "job": "Nvme0n1", 00:32:44.481 "core_mask": "0x2", 00:32:44.481 "workload": "randwrite", 00:32:44.481 "status": "finished", 00:32:44.481 "queue_depth": 128, 00:32:44.481 "io_size": 4096, 00:32:44.481 "runtime": 10.004648, 00:32:44.481 "iops": 21955.994853592052, 00:32:44.481 "mibps": 85.76560489684395, 00:32:44.481 "io_failed": 0, 00:32:44.481 "io_timeout": 0, 00:32:44.481 "avg_latency_us": 5826.757687477428, 00:32:44.481 "min_latency_us": 3713.7066666666665, 00:32:44.481 "max_latency_us": 28398.933333333334 00:32:44.481 } 00:32:44.481 ], 00:32:44.481 "core_count": 1 00:32:44.481 } 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2970404 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2970404 ']' 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2970404 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2970404 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2970404' 00:32:44.481 killing process with pid 2970404 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2970404 00:32:44.481 Received shutdown signal, test time was about 10.000000 seconds 00:32:44.481 00:32:44.481 Latency(us) 00:32:44.481 [2024-11-20T10:33:37.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.481 [2024-11-20T10:33:37.223Z] =================================================================================================================== 00:32:44.481 [2024-11-20T10:33:37.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.481 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2970404 00:32:44.743 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:44.743 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:45.003 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:45.003 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2966238 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2966238 00:32:45.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2966238 Killed "${NVMF_APP[@]}" "$@" 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2972700 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2972700 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2972700 ']' 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
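What distinguishes the dirty variant is visible just above: the previous nvmf_tgt (pid 2966238) was removed with kill -9, so the lvstore was never unloaded cleanly, and a fresh single-core, interrupt-mode target is brought up in its place. A sketch of that restart step under the same $SPDK_ROOT shorthand (PIDs are specific to this run):

    # SIGKILL on purpose: skipping clean shutdown leaves the blobstore "dirty".
    kill -9 2966238
    # Fresh target in the test netns, one core, interrupt mode:
    ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!   # 2972700 in this run; waitforlisten polls /var/tmp/spdk.sock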
00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.264 11:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:45.264 [2024-11-20 11:33:37.932085] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.264 [2024-11-20 11:33:37.933144] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:32:45.264 [2024-11-20 11:33:37.933200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.525 [2024-11-20 11:33:38.024809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.525 [2024-11-20 11:33:38.057384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.525 [2024-11-20 11:33:38.057413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.525 [2024-11-20 11:33:38.057420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.525 [2024-11-20 11:33:38.057424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.525 [2024-11-20 11:33:38.057429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.525 [2024-11-20 11:33:38.057881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.525 [2024-11-20 11:33:38.109384] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:45.525 [2024-11-20 11:33:38.109572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
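The interrupt-mode notices above confirm that both the app thread and the nvmf poll group are running in interrupt mode. The next step re-creates the AIO bdev on the same backing file, which makes the lvol layer reload the dirty lvstore: the blobstore load path detects the unclean shutdown and performs recovery (the bs_recover and "Recover: blob 0x0/0x1" notices in the trace below), after which the lvol must reappear. A sketch of that re-create-and-verify step, same $SPDK_ROOT shorthand as above:

    # Re-create the AIO bdev over the untouched backing file; loading the
    # dirty lvstore triggers blobstore recovery before the lvol shows up.
    "$SPDK_ROOT/scripts/rpc.py" bdev_aio_create \
        "$SPDK_ROOT/test/nvmf/target/aio_bdev" aio_bdev 4096
    "$SPDK_ROOT/scripts/rpc.py" bdev_wait_for_examine
    "$SPDK_ROOT/scripts/rpc.py" bdev_get_bdevs \
        -b b10b9e41-a12e-4416-bd28-2f21db13d464 -t 2000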
00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.097 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:46.358 [2024-11-20 11:33:38.948398] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:46.358 [2024-11-20 11:33:38.948637] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:46.358 [2024-11-20 11:33:38.948727] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:46.358 11:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:46.626 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b10b9e41-a12e-4416-bd28-2f21db13d464 -t 2000 00:32:46.626 [ 00:32:46.626 { 00:32:46.626 "name": "b10b9e41-a12e-4416-bd28-2f21db13d464", 00:32:46.626 "aliases": [ 00:32:46.626 "lvs/lvol" 00:32:46.626 ], 00:32:46.626 "product_name": "Logical Volume", 00:32:46.626 "block_size": 4096, 00:32:46.626 "num_blocks": 38912, 00:32:46.626 "uuid": "b10b9e41-a12e-4416-bd28-2f21db13d464", 00:32:46.626 "assigned_rate_limits": { 00:32:46.626 "rw_ios_per_sec": 0, 00:32:46.626 "rw_mbytes_per_sec": 0, 00:32:46.626 
"r_mbytes_per_sec": 0, 00:32:46.626 "w_mbytes_per_sec": 0 00:32:46.626 }, 00:32:46.626 "claimed": false, 00:32:46.626 "zoned": false, 00:32:46.626 "supported_io_types": { 00:32:46.626 "read": true, 00:32:46.626 "write": true, 00:32:46.626 "unmap": true, 00:32:46.626 "flush": false, 00:32:46.626 "reset": true, 00:32:46.626 "nvme_admin": false, 00:32:46.626 "nvme_io": false, 00:32:46.626 "nvme_io_md": false, 00:32:46.626 "write_zeroes": true, 00:32:46.626 "zcopy": false, 00:32:46.626 "get_zone_info": false, 00:32:46.626 "zone_management": false, 00:32:46.626 "zone_append": false, 00:32:46.626 "compare": false, 00:32:46.626 "compare_and_write": false, 00:32:46.626 "abort": false, 00:32:46.627 "seek_hole": true, 00:32:46.627 "seek_data": true, 00:32:46.627 "copy": false, 00:32:46.627 "nvme_iov_md": false 00:32:46.627 }, 00:32:46.627 "driver_specific": { 00:32:46.627 "lvol": { 00:32:46.627 "lvol_store_uuid": "dd10b138-4813-4fb4-8a05-c0c840673336", 00:32:46.627 "base_bdev": "aio_bdev", 00:32:46.627 "thin_provision": false, 00:32:46.627 "num_allocated_clusters": 38, 00:32:46.627 "snapshot": false, 00:32:46.627 "clone": false, 00:32:46.627 "esnap_clone": false 00:32:46.627 } 00:32:46.627 } 00:32:46.627 } 00:32:46.627 ] 00:32:46.627 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:46.627 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:46.627 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:46.890 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:46.890 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:46.890 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:47.150 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:47.150 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.150 [2024-11-20 11:33:39.886424] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:47.410 11:33:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:47.410 11:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:47.410 request: 00:32:47.410 { 00:32:47.410 "uuid": "dd10b138-4813-4fb4-8a05-c0c840673336", 00:32:47.410 "method": "bdev_lvol_get_lvstores", 00:32:47.410 "req_id": 1 00:32:47.410 } 00:32:47.410 Got JSON-RPC error response 00:32:47.410 response: 00:32:47.410 { 00:32:47.410 "code": -19, 00:32:47.410 "message": "No such device" 00:32:47.410 } 00:32:47.410 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:47.410 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:47.410 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:47.410 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:47.410 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:47.670 aio_bdev 00:32:47.670 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:47.670 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:47.670 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:47.670 11:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:47.670 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:47.670 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:47.670 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:47.930 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b10b9e41-a12e-4416-bd28-2f21db13d464 -t 2000 00:32:47.930 [ 00:32:47.930 { 00:32:47.930 "name": "b10b9e41-a12e-4416-bd28-2f21db13d464", 00:32:47.930 "aliases": [ 00:32:47.930 "lvs/lvol" 00:32:47.930 ], 00:32:47.930 "product_name": "Logical Volume", 00:32:47.930 "block_size": 4096, 00:32:47.930 "num_blocks": 38912, 00:32:47.930 "uuid": "b10b9e41-a12e-4416-bd28-2f21db13d464", 00:32:47.930 "assigned_rate_limits": { 00:32:47.930 "rw_ios_per_sec": 0, 00:32:47.930 "rw_mbytes_per_sec": 0, 00:32:47.930 "r_mbytes_per_sec": 0, 00:32:47.930 "w_mbytes_per_sec": 0 00:32:47.930 }, 00:32:47.930 "claimed": false, 00:32:47.930 "zoned": false, 00:32:47.930 "supported_io_types": { 00:32:47.930 "read": true, 00:32:47.930 "write": true, 00:32:47.930 "unmap": true, 00:32:47.931 "flush": false, 00:32:47.931 "reset": true, 00:32:47.931 "nvme_admin": false, 00:32:47.931 "nvme_io": false, 00:32:47.931 "nvme_io_md": false, 00:32:47.931 "write_zeroes": true, 00:32:47.931 "zcopy": false, 00:32:47.931 "get_zone_info": false, 00:32:47.931 "zone_management": false, 00:32:47.931 "zone_append": false, 00:32:47.931 "compare": false, 00:32:47.931 "compare_and_write": false, 00:32:47.931 "abort": false, 00:32:47.931 "seek_hole": true, 00:32:47.931 "seek_data": true, 00:32:47.931 "copy": false, 00:32:47.931 "nvme_iov_md": false 00:32:47.931 }, 00:32:47.931 "driver_specific": { 00:32:47.931 "lvol": { 00:32:47.931 "lvol_store_uuid": "dd10b138-4813-4fb4-8a05-c0c840673336", 00:32:47.931 "base_bdev": "aio_bdev", 00:32:47.931 "thin_provision": false, 00:32:47.931 "num_allocated_clusters": 38, 00:32:47.931 "snapshot": false, 00:32:47.931 "clone": false, 00:32:47.931 "esnap_clone": false 00:32:47.931 } 00:32:47.931 } 00:32:47.931 } 00:32:47.931 ] 00:32:47.931 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:47.931 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:47.931 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:48.191 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:48.191 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:48.191 11:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:48.451 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:48.451 11:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b10b9e41-a12e-4416-bd28-2f21db13d464 00:32:48.451 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd10b138-4813-4fb4-8a05-c0c840673336 00:32:48.712 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:48.973 00:32:48.973 real 0m17.907s 00:32:48.973 user 0m35.625s 00:32:48.973 sys 0m3.320s 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:48.973 ************************************ 00:32:48.973 END TEST lvs_grow_dirty 00:32:48.973 ************************************ 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:48.973 nvmf_trace.0 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
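The cluster assertions this run just passed fall straight out of the sizes visible in the trace: the lvstore uses 4 MiB clusters (--cluster-sz 4194304) on an aio file truncated from 200 MiB to 400 MiB, and the lvol is 150 MiB. In this run one cluster goes to lvstore metadata (49 of 50 clusters before the grow, 99 of 100 after). Worked out as shell arithmetic:

    # Cluster math behind (( data_clusters == 99 )) and (( free_clusters == 61 )):
    cluster_mb=4                                     # --cluster-sz 4194304
    total=$(( 400 / cluster_mb - 1 ))                # 400 MiB file -> 99 data clusters
    lvol=$(( (150 + cluster_mb - 1) / cluster_mb ))  # 150 MiB lvol -> 38 clusters
    free=$(( total - lvol ))                         # 99 - 38 = 61
    # 38 clusters also match the bdev's size: 38 * 4 MiB / 4096 = 38912 num_blocks.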
00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.973 rmmod nvme_tcp 00:32:48.973 rmmod nvme_fabrics 00:32:48.973 rmmod nvme_keyring 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2972700 ']' 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2972700 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2972700 ']' 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2972700 00:32:48.973 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:49.233 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.233 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2972700 00:32:49.233 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2972700' 00:32:49.234 killing process with pid 2972700 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2972700 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2972700 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.234 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:51.778 00:32:51.778 real 0m45.440s 00:32:51.778 user 0m54.176s 00:32:51.778 sys 0m11.137s 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.778 ************************************ 00:32:51.778 END TEST nvmf_lvs_grow 00:32:51.778 ************************************ 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:51.778 ************************************ 00:32:51.778 START TEST nvmf_bdev_io_wait 00:32:51.778 ************************************ 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:51.778 * Looking for test storage... 
00:32:51.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.778 --rc genhtml_branch_coverage=1 00:32:51.778 --rc genhtml_function_coverage=1 00:32:51.778 --rc genhtml_legend=1 00:32:51.778 --rc geninfo_all_blocks=1 00:32:51.778 --rc geninfo_unexecuted_blocks=1 00:32:51.778 00:32:51.778 ' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.778 --rc genhtml_branch_coverage=1 00:32:51.778 --rc genhtml_function_coverage=1 00:32:51.778 --rc genhtml_legend=1 00:32:51.778 --rc geninfo_all_blocks=1 00:32:51.778 --rc geninfo_unexecuted_blocks=1 00:32:51.778 00:32:51.778 ' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.778 --rc genhtml_branch_coverage=1 00:32:51.778 --rc genhtml_function_coverage=1 00:32:51.778 --rc genhtml_legend=1 00:32:51.778 --rc geninfo_all_blocks=1 00:32:51.778 --rc geninfo_unexecuted_blocks=1 00:32:51.778 00:32:51.778 ' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.778 --rc genhtml_branch_coverage=1 00:32:51.778 --rc genhtml_function_coverage=1 00:32:51.778 --rc genhtml_legend=1 00:32:51.778 --rc geninfo_all_blocks=1 00:32:51.778 --rc 
geninfo_unexecuted_blocks=1 00:32:51.778 00:32:51.778 ' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.778 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:51.779 11:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
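The device scan traced here is gather_supported_nvmf_pci_devs matching the host's two Intel E810 ports (vendor 0x8086, device 0x159b, driver ice) and resolving their kernel net devices through sysfs. A simplified sketch of that loop, assuming only the E810 IDs matter (the real helper also knows the x722 and Mellanox IDs and the RDMA-specific branches):

    for pci in /sys/bus/pci/devices/*; do
        # keep only Intel E810 ports
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done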
00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:59.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:59.918 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.918 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:59.919 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:59.919 
11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:59.919 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:59.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:59.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:32:59.919 00:32:59.919 --- 10.0.0.2 ping statistics --- 00:32:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.919 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:59.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:59.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:32:59.919 00:32:59.919 --- 10.0.0.1 ping statistics --- 00:32:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:59.919 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2977592 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2977592 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2977592 ']' 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
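With cvl_0_0 moved into the cvl_0_0_ns_spdk namespace (10.0.0.2) and cvl_0_1 left on the host side (10.0.0.1), the two-way ping above validates the topology before nvmfappstart launches the target inside the namespace. Because of --wait-for-rpc, framework initialization is deferred until the test has tuned bdev options over the RPC socket, as the trace further down shows. The pattern, roughly (paths shortened; waitforlisten is the autotest helper that polls /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"           # block until the RPC socket accepts connections
    rpc.py bdev_set_options -p 5 -c 1  # only possible before framework init
    rpc.py framework_start_init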
00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.919 11:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:59.919 [2024-11-20 11:33:51.915358] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:59.919 [2024-11-20 11:33:51.916500] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:32:59.919 [2024-11-20 11:33:51.916552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:59.919 [2024-11-20 11:33:52.016566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:59.919 [2024-11-20 11:33:52.071022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:59.919 [2024-11-20 11:33:52.071072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:59.919 [2024-11-20 11:33:52.071081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:59.919 [2024-11-20 11:33:52.071088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:59.919 [2024-11-20 11:33:52.071094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:59.919 [2024-11-20 11:33:52.073115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.919 [2024-11-20 11:33:52.073276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:59.919 [2024-11-20 11:33:52.073566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:59.919 [2024-11-20 11:33:52.073569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.919 [2024-11-20 11:33:52.074073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
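These notices confirm the interrupt-mode run: all four reactors come up and every spdk_thread is switched to event-driven operation instead of busy polling. They also spell out how to inspect the 0xFFFF tracepoint mask that was enabled with -e; as a sketch, either live against shm instance 0 or offline from the file archived earlier:

    spdk_trace -s nvmf -i 0       # snapshot events from the live target
    cp /dev/shm/nvmf_trace.0 .    # or copy the shm file and decode it offline
    spdk_trace -f ./nvmf_trace.0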
00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.181 [2024-11-20 11:33:52.850906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:00.181 [2024-11-20 11:33:52.851590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:00.181 [2024-11-20 11:33:52.851675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:00.181 [2024-11-20 11:33:52.851873] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
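Once framework_start_init returns and the poll groups flip to interrupt mode, the rest of the target setup is a short provisioning sequence issued through rpc_cmd, as the trace below shows; condensed:

    rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB IO unit size
    rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB ram disk, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420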
00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.181 [2024-11-20 11:33:52.862616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.181 Malloc0 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.181 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:00.443 [2024-11-20 11:33:52.938867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2977940 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2977942 00:33:00.443 11:33:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.443 { 00:33:00.443 "params": { 00:33:00.443 "name": "Nvme$subsystem", 00:33:00.443 "trtype": "$TEST_TRANSPORT", 00:33:00.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.443 "adrfam": "ipv4", 00:33:00.443 "trsvcid": "$NVMF_PORT", 00:33:00.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.443 "hdgst": ${hdgst:-false}, 00:33:00.443 "ddgst": ${ddgst:-false} 00:33:00.443 }, 00:33:00.443 "method": "bdev_nvme_attach_controller" 00:33:00.443 } 00:33:00.443 EOF 00:33:00.443 )") 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2977944 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.443 { 00:33:00.443 "params": { 00:33:00.443 "name": "Nvme$subsystem", 00:33:00.443 "trtype": "$TEST_TRANSPORT", 00:33:00.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.443 "adrfam": "ipv4", 00:33:00.443 "trsvcid": "$NVMF_PORT", 00:33:00.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.443 "hdgst": ${hdgst:-false}, 00:33:00.443 "ddgst": ${ddgst:-false} 00:33:00.443 }, 00:33:00.443 "method": "bdev_nvme_attach_controller" 00:33:00.443 } 00:33:00.443 EOF 00:33:00.443 )") 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2977947 00:33:00.443 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.444 { 00:33:00.444 "params": { 00:33:00.444 "name": "Nvme$subsystem", 00:33:00.444 "trtype": "$TEST_TRANSPORT", 00:33:00.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.444 "adrfam": "ipv4", 00:33:00.444 "trsvcid": "$NVMF_PORT", 00:33:00.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.444 "hdgst": ${hdgst:-false}, 00:33:00.444 "ddgst": ${ddgst:-false} 00:33:00.444 }, 00:33:00.444 "method": "bdev_nvme_attach_controller" 00:33:00.444 } 00:33:00.444 EOF 00:33:00.444 )") 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.444 { 00:33:00.444 "params": { 00:33:00.444 "name": "Nvme$subsystem", 00:33:00.444 "trtype": "$TEST_TRANSPORT", 00:33:00.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.444 "adrfam": "ipv4", 00:33:00.444 "trsvcid": "$NVMF_PORT", 00:33:00.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.444 "hdgst": ${hdgst:-false}, 00:33:00.444 "ddgst": ${ddgst:-false} 00:33:00.444 }, 00:33:00.444 "method": "bdev_nvme_attach_controller" 00:33:00.444 } 00:33:00.444 EOF 00:33:00.444 )") 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2977940 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
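gen_nvmf_target_json expands the heredoc template above once per bdevperf instance and hands the result over on /dev/fd/63. After substitution the attach entry is the object printed in the trace that follows, wrapped in a standard bdev-subsystem config; roughly the shape below (a sketch only; the exact wrapper that gen_nvmf_target_json assembles with jq may carry additional entries):

    config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }'
    bdevperf --json <(echo "$config") -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256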
00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.444 "params": { 00:33:00.444 "name": "Nvme1", 00:33:00.444 "trtype": "tcp", 00:33:00.444 "traddr": "10.0.0.2", 00:33:00.444 "adrfam": "ipv4", 00:33:00.444 "trsvcid": "4420", 00:33:00.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.444 "hdgst": false, 00:33:00.444 "ddgst": false 00:33:00.444 }, 00:33:00.444 "method": "bdev_nvme_attach_controller" 00:33:00.444 }' 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.444 "params": { 00:33:00.444 "name": "Nvme1", 00:33:00.444 "trtype": "tcp", 00:33:00.444 "traddr": "10.0.0.2", 00:33:00.444 "adrfam": "ipv4", 00:33:00.444 "trsvcid": "4420", 00:33:00.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.444 "hdgst": false, 00:33:00.444 "ddgst": false 00:33:00.444 }, 00:33:00.444 "method": "bdev_nvme_attach_controller" 00:33:00.444 }' 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.444 "params": { 00:33:00.444 "name": "Nvme1", 00:33:00.444 "trtype": "tcp", 00:33:00.444 "traddr": "10.0.0.2", 00:33:00.444 "adrfam": "ipv4", 00:33:00.444 "trsvcid": "4420", 00:33:00.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.444 "hdgst": false, 00:33:00.444 "ddgst": false 00:33:00.444 }, 00:33:00.444 "method": "bdev_nvme_attach_controller" 00:33:00.444 }' 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:00.444 11:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.444 "params": { 00:33:00.444 "name": "Nvme1", 00:33:00.444 "trtype": "tcp", 00:33:00.444 "traddr": "10.0.0.2", 00:33:00.444 "adrfam": "ipv4", 00:33:00.444 "trsvcid": "4420", 00:33:00.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.444 "hdgst": false, 00:33:00.444 "ddgst": false 00:33:00.444 }, 00:33:00.444 "method": "bdev_nvme_attach_controller" 00:33:00.444 }' 00:33:00.444 [2024-11-20 11:33:52.997410] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:33:00.444 [2024-11-20 11:33:52.997483] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:00.444 [2024-11-20 11:33:52.997518] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:33:00.444 [2024-11-20 11:33:52.997582] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:00.444 [2024-11-20 11:33:52.998309] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:33:00.444 [2024-11-20 11:33:52.998364] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:00.444 [2024-11-20 11:33:53.003226] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:33:00.444 [2024-11-20 11:33:53.003300] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:00.706 [2024-11-20 11:33:53.220165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.706 [2024-11-20 11:33:53.258859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:00.706 [2024-11-20 11:33:53.310243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.706 [2024-11-20 11:33:53.350036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:00.706 [2024-11-20 11:33:53.403944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.706 [2024-11-20 11:33:53.441988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:00.967 [2024-11-20 11:33:53.465814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.967 [2024-11-20 11:33:53.501841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:00.967 Running I/O for 1 seconds... 00:33:00.967 Running I/O for 1 seconds... 00:33:00.967 Running I/O for 1 seconds... 00:33:01.227 Running I/O for 1 seconds... 
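All four bdevperf instances run against the same cnode1 namespace at once, one workload each, on disjoint core masks and shm instance IDs so the EAL processes stay out of each other's way; condensed from the invocations traced above (paths shortened):

    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID

In the one-second results that follow, flush is presumably close to free on the Malloc backing bdev, which is why its IOPS dwarf the network-bound read and write numbers.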
00:33:02.171 7913.00 IOPS, 30.91 MiB/s 00:33:02.171 Latency(us) 00:33:02.171 [2024-11-20T10:33:54.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.171 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:02.172 Nvme1n1 : 1.02 7924.46 30.95 0.00 0.00 16028.69 4642.13 24357.55 00:33:02.172 [2024-11-20T10:33:54.914Z] =================================================================================================================== 00:33:02.172 [2024-11-20T10:33:54.914Z] Total : 7924.46 30.95 0.00 0.00 16028.69 4642.13 24357.55 00:33:02.172 182424.00 IOPS, 712.59 MiB/s 00:33:02.172 Latency(us) 00:33:02.172 [2024-11-20T10:33:54.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.172 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:02.172 Nvme1n1 : 1.00 182056.24 711.16 0.00 0.00 699.13 300.37 2020.69 00:33:02.172 [2024-11-20T10:33:54.914Z] =================================================================================================================== 00:33:02.172 [2024-11-20T10:33:54.914Z] Total : 182056.24 711.16 0.00 0.00 699.13 300.37 2020.69 00:33:02.172 7179.00 IOPS, 28.04 MiB/s 00:33:02.172 Latency(us) 00:33:02.172 [2024-11-20T10:33:54.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.172 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:02.172 Nvme1n1 : 1.01 7258.04 28.35 0.00 0.00 17575.14 5215.57 26869.76 00:33:02.172 [2024-11-20T10:33:54.914Z] =================================================================================================================== 00:33:02.172 [2024-11-20T10:33:54.914Z] Total : 7258.04 28.35 0.00 0.00 17575.14 5215.57 26869.76 00:33:02.172 11590.00 IOPS, 45.27 MiB/s 00:33:02.172 Latency(us) 00:33:02.172 [2024-11-20T10:33:54.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.172 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:02.172 Nvme1n1 : 1.01 11661.50 45.55 0.00 0.00 10939.05 2143.57 17148.59 00:33:02.172 [2024-11-20T10:33:54.914Z] =================================================================================================================== 00:33:02.172 [2024-11-20T10:33:54.914Z] Total : 11661.50 45.55 0.00 0.00 10939.05 2143.57 17148.59 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2977942 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2977944 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2977947 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.172 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.172 rmmod nvme_tcp 00:33:02.172 rmmod nvme_fabrics 00:33:02.172 rmmod nvme_keyring 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2977592 ']' 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2977592 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2977592 ']' 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2977592 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.434 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977592 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977592' 00:33:02.434 killing process with pid 2977592 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2977592 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2977592 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.434 11:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:04.979 00:33:04.979 real 0m13.134s 00:33:04.979 user 0m16.087s 00:33:04.979 sys 0m7.648s 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:04.979 ************************************ 00:33:04.979 END TEST nvmf_bdev_io_wait 00:33:04.979 ************************************ 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.979 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:04.979 ************************************ 00:33:04.979 START TEST nvmf_queue_depth 00:33:04.979 ************************************ 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:04.980 * Looking for test storage... 
00:33:04.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:04.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.980 --rc genhtml_branch_coverage=1 00:33:04.980 --rc genhtml_function_coverage=1 00:33:04.980 --rc genhtml_legend=1 00:33:04.980 --rc geninfo_all_blocks=1 00:33:04.980 --rc geninfo_unexecuted_blocks=1 00:33:04.980 00:33:04.980 ' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:04.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.980 --rc genhtml_branch_coverage=1 00:33:04.980 --rc genhtml_function_coverage=1 00:33:04.980 --rc genhtml_legend=1 00:33:04.980 --rc geninfo_all_blocks=1 00:33:04.980 --rc geninfo_unexecuted_blocks=1 00:33:04.980 00:33:04.980 ' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:04.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.980 --rc genhtml_branch_coverage=1 00:33:04.980 --rc genhtml_function_coverage=1 00:33:04.980 --rc genhtml_legend=1 00:33:04.980 --rc geninfo_all_blocks=1 00:33:04.980 --rc geninfo_unexecuted_blocks=1 00:33:04.980 00:33:04.980 ' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:04.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.980 --rc genhtml_branch_coverage=1 00:33:04.980 --rc genhtml_function_coverage=1 00:33:04.980 --rc genhtml_legend=1 00:33:04.980 --rc geninfo_all_blocks=1 00:33:04.980 --rc 
geninfo_unexecuted_blocks=1 00:33:04.980 00:33:04.980 ' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.980 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.981 11:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:13.122 11:34:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:13.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:13.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:13.122 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:13.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:13.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:13.123 11:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:13.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:13.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:33:13.123 00:33:13.123 --- 10.0.0.2 ping statistics --- 00:33:13.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.123 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:13.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:13.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:33:13.123 00:33:13.123 --- 10.0.0.1 ping statistics --- 00:33:13.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:13.123 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2982357 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2982357 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2982357 ']' 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.123 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.123 [2024-11-20 11:34:05.127465] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:13.123 [2024-11-20 11:34:05.128579] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:33:13.123 [2024-11-20 11:34:05.128633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.123 [2024-11-20 11:34:05.230896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.123 [2024-11-20 11:34:05.281776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.123 [2024-11-20 11:34:05.281826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.123 [2024-11-20 11:34:05.281835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.123 [2024-11-20 11:34:05.281842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.123 [2024-11-20 11:34:05.281848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.123 [2024-11-20 11:34:05.282602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.123 [2024-11-20 11:34:05.359351] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:13.123 [2024-11-20 11:34:05.359651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
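With the target up in interrupt mode on core 1, queue_depth.sh@23-27 then provisions it over JSON-RPC. Stripped of the xtrace noise, the sequence traced in the following lines amounts to the calls below (rpc.py path relative to the spdk checkout; rpc_cmd here targets the default /var/tmp/spdk.sock socket):

  # Transcribed from the queue_depth.sh trace below; flags are verbatim.
  rpc=scripts/rpc.py                            # rpc_cmd in the trace wraps this
  $rpc nvmf_create_transport -t tcp -o -u 8192  # TCP transport (-u: IO unit size)
  $rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then attaches through the bdevperf RPC socket (queue_depth.sh@34) with bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1, after which bdevperf.py perform_tests drives the 10-second verify run at queue depth 1024 whose results follow.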
00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.384 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.384 [2024-11-20 11:34:06.007475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.384 Malloc0 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.384 [2024-11-20 11:34:06.087497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2982659 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2982659 /var/tmp/bdevperf.sock 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2982659 ']' 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:13.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.384 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:13.645 [2024-11-20 11:34:06.144046] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:33:13.645 [2024-11-20 11:34:06.144116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982659 ] 00:33:13.645 [2024-11-20 11:34:06.236883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.645 [2024-11-20 11:34:06.289578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.586 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.586 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:14.586 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:14.586 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.586 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:14.586 NVMe0n1 00:33:14.586 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.586 11:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:14.586 Running I/O for 10 seconds... 00:33:16.470 8206.00 IOPS, 32.05 MiB/s [2024-11-20T10:34:10.286Z] 8705.50 IOPS, 34.01 MiB/s [2024-11-20T10:34:11.229Z] 8883.67 IOPS, 34.70 MiB/s [2024-11-20T10:34:12.615Z] 9989.00 IOPS, 39.02 MiB/s [2024-11-20T10:34:13.556Z] 10700.60 IOPS, 41.80 MiB/s [2024-11-20T10:34:14.498Z] 11128.83 IOPS, 43.47 MiB/s [2024-11-20T10:34:15.440Z] 11487.57 IOPS, 44.87 MiB/s [2024-11-20T10:34:16.382Z] 11775.00 IOPS, 46.00 MiB/s [2024-11-20T10:34:17.325Z] 11961.78 IOPS, 46.73 MiB/s [2024-11-20T10:34:17.325Z] 12173.00 IOPS, 47.55 MiB/s 00:33:24.583 Latency(us) 00:33:24.583 [2024-11-20T10:34:17.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.583 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:24.583 Verification LBA range: start 0x0 length 0x4000 00:33:24.583 NVMe0n1 : 10.06 12192.59 47.63 0.00 0.00 83667.40 23265.28 75584.85 00:33:24.583 [2024-11-20T10:34:17.325Z] =================================================================================================================== 00:33:24.583 [2024-11-20T10:34:17.325Z] Total : 12192.59 47.63 0.00 0.00 83667.40 23265.28 75584.85 00:33:24.583 { 00:33:24.583 "results": [ 00:33:24.583 { 00:33:24.583 "job": "NVMe0n1", 00:33:24.583 "core_mask": "0x1", 00:33:24.583 "workload": "verify", 00:33:24.583 "status": "finished", 00:33:24.583 "verify_range": { 00:33:24.583 "start": 0, 00:33:24.583 "length": 16384 00:33:24.583 }, 00:33:24.583 "queue_depth": 1024, 00:33:24.583 "io_size": 4096, 00:33:24.583 "runtime": 10.057663, 00:33:24.583 "iops": 12192.593846105203, 00:33:24.583 "mibps": 47.62731971134845, 00:33:24.583 "io_failed": 0, 00:33:24.583 "io_timeout": 0, 00:33:24.583 "avg_latency_us": 83667.40443494878, 00:33:24.583 "min_latency_us": 23265.28, 00:33:24.583 "max_latency_us": 75584.85333333333 00:33:24.583 } 00:33:24.583 ], 
00:33:24.583 "core_count": 1 00:33:24.583 } 00:33:24.583 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2982659 00:33:24.583 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2982659 ']' 00:33:24.583 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2982659 00:33:24.583 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:24.583 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.583 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2982659 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2982659' 00:33:24.847 killing process with pid 2982659 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2982659 00:33:24.847 Received shutdown signal, test time was about 10.000000 seconds 00:33:24.847 00:33:24.847 Latency(us) 00:33:24.847 [2024-11-20T10:34:17.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.847 [2024-11-20T10:34:17.589Z] =================================================================================================================== 00:33:24.847 [2024-11-20T10:34:17.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2982659 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.847 rmmod nvme_tcp 00:33:24.847 rmmod nvme_fabrics 00:33:24.847 rmmod nvme_keyring 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:24.847 11:34:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2982357 ']' 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2982357 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2982357 ']' 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2982357 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2982357 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2982357' 00:33:24.847 killing process with pid 2982357 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2982357 00:33:24.847 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2982357 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.112 11:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.660 00:33:27.660 real 0m22.457s 00:33:27.660 user 0m24.565s 00:33:27.660 sys 0m7.502s 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:27.660 ************************************ 00:33:27.660 END TEST nvmf_queue_depth 00:33:27.660 ************************************ 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:27.660 ************************************ 00:33:27.660 START TEST nvmf_target_multipath 00:33:27.660 ************************************ 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:27.660 * Looking for test storage... 00:33:27.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:27.660 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:27.660 11:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.660 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.661 --rc genhtml_branch_coverage=1 00:33:27.661 --rc genhtml_function_coverage=1 00:33:27.661 --rc genhtml_legend=1 00:33:27.661 --rc geninfo_all_blocks=1 00:33:27.661 --rc geninfo_unexecuted_blocks=1 00:33:27.661 00:33:27.661 ' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.661 --rc genhtml_branch_coverage=1 00:33:27.661 --rc genhtml_function_coverage=1 00:33:27.661 --rc genhtml_legend=1 00:33:27.661 --rc geninfo_all_blocks=1 00:33:27.661 --rc geninfo_unexecuted_blocks=1 00:33:27.661 00:33:27.661 ' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.661 --rc genhtml_branch_coverage=1 00:33:27.661 --rc genhtml_function_coverage=1 00:33:27.661 --rc genhtml_legend=1 00:33:27.661 --rc geninfo_all_blocks=1 00:33:27.661 --rc 
geninfo_unexecuted_blocks=1 00:33:27.661 00:33:27.661 ' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:27.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.661 --rc genhtml_branch_coverage=1 00:33:27.661 --rc genhtml_function_coverage=1 00:33:27.661 --rc genhtml_legend=1 00:33:27.661 --rc geninfo_all_blocks=1 00:33:27.661 --rc geninfo_unexecuted_blocks=1 00:33:27.661 00:33:27.661 ' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
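The version probe traced above (scripts/common.sh: lt -> cmp_versions -> decimal) is how the harness decides whether the installed lcov predates 2.0 before exporting the legacy --rc lcov_* coverage option names. A minimal bash re-creation of that comparison, using only the names and control flow visible in the xtrace (anything not shown there, such as other comparison operators, is an assumption):

    #!/usr/bin/env bash
    # Echo the component if it is purely numeric, else 0 -- mirrors the
    # "[[ 1 =~ ^[0-9]+$ ]] / echo 1" steps in the trace.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    # Split both versions on '.', '-' or ':' and compare component-wise,
    # as the trace shows for "lt 1.15 2".
    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 lt=0 gt=0 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            if (( ver1[v] > ver2[v] )); then gt=1; break; fi
            if (( ver1[v] < ver2[v] )); then lt=1; break; fi
        done
        case "$op" in
            '<') (( lt == 1 )) ;;
            '>') (( gt == 1 )) ;;
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"

Run against the trace's inputs, lt 1.15 2 succeeds, which is why the LCOV_OPTS/LCOV exports above carry the 1.x-style --rc lcov_branch_coverage/lcov_function_coverage flags.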
00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.661 11:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.661 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.662 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.662 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.662 11:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
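The stretch that follows is gather_supported_nvmf_pci_devs matching the host's two E810 ports (vendor 0x8086, device 0x159b, ice driver) and resolving each PCI function to its kernel net device through sysfs. That resolution step can be sketched as below; the lspci scan is a stand-in assumption for the pci_bus_cache lookup the real nvmf/common.sh performs:

    #!/usr/bin/env bash
    # Map Intel E810 functions (0x8086:0x159b) to their net devices --
    # the same "Found net devices under 0000:4b:00.x" resolution the
    # trace below prints.
    intel=0x8086 e810=0x159b
    net_devs=()
    while read -r pci; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # Without nullglob the pattern stays literal when the NIC has no
        # bound netdev (e.g. bound to vfio-pci), so test the first entry.
        [[ -e ${pci_net_devs[0]} ]] || continue
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done < <(lspci -Dnmm -d "${intel#0x}:${e810#0x}" | awk '{print $1}')

On this host that yields cvl_0_0 and cvl_0_1, the pair that nvmftestinit then splits across a network namespace (cvl_0_0_ns_spdk) so a single box can play 10.0.0.2 target and 10.0.0.1 initiator, as the ip netns / ping lines further down show.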
00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:35.802 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:35.802 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:35.802 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:35.802 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:35.802 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.802 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:35.803 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:35.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:35.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:33:35.803 00:33:35.803 --- 10.0.0.2 ping statistics --- 00:33:35.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.803 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:35.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:35.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:33:35.803 00:33:35.803 --- 10.0.0.1 ping statistics --- 00:33:35.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:35.803 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:35.803 only one NIC for nvmf test 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.803 rmmod nvme_tcp 00:33:35.803 rmmod nvme_fabrics 00:33:35.803 rmmod nvme_keyring 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:35.803 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.803 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.185 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.185 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:37.185 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:37.186 11:34:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:37.186 00:33:37.186 real 0m9.878s 00:33:37.186 user 0m2.221s 00:33:37.186 sys 0m5.610s 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:37.186 ************************************ 00:33:37.186 END TEST nvmf_target_multipath 00:33:37.186 ************************************ 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:37.186 ************************************ 00:33:37.186 START TEST nvmf_zcopy 00:33:37.186 ************************************ 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:37.186 * Looking for test storage... 
00:33:37.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:37.186 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.446 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.446 --rc genhtml_branch_coverage=1 00:33:37.446 --rc genhtml_function_coverage=1 00:33:37.446 --rc genhtml_legend=1 00:33:37.446 --rc geninfo_all_blocks=1 00:33:37.446 --rc geninfo_unexecuted_blocks=1 00:33:37.446 00:33:37.446 ' 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.446 --rc genhtml_branch_coverage=1 00:33:37.446 --rc genhtml_function_coverage=1 00:33:37.446 --rc genhtml_legend=1 00:33:37.446 --rc geninfo_all_blocks=1 00:33:37.446 --rc geninfo_unexecuted_blocks=1 00:33:37.446 00:33:37.446 ' 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.446 --rc genhtml_branch_coverage=1 00:33:37.446 --rc genhtml_function_coverage=1 00:33:37.446 --rc genhtml_legend=1 00:33:37.446 --rc geninfo_all_blocks=1 00:33:37.446 --rc geninfo_unexecuted_blocks=1 00:33:37.446 00:33:37.446 ' 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:37.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.446 --rc genhtml_branch_coverage=1 00:33:37.446 --rc genhtml_function_coverage=1 00:33:37.446 --rc genhtml_legend=1 00:33:37.446 --rc geninfo_all_blocks=1 00:33:37.446 --rc geninfo_unexecuted_blocks=1 00:33:37.446 00:33:37.446 ' 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.446 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.447 11:34:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:37.447 11:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:45.628 11:34:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:45.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:45.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:45.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.628 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:45.629 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:45.629 11:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:45.629 11:34:37 
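The device-discovery loop above maps each supported E810 PCI function to its kernel net device through sysfs; both ports of the 0x159b adapter resolve to cvl_0_0 and cvl_0_1. A simplified sketch of the same walk (the operstate read stands in for the trace's already-evaluated '[[ up == up ]]' check):

for pci in "${pci_devs[@]}"; do
  for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$path" ] || continue
    dev=${path##*/}                                   # e.g. cvl_0_0
    [ "$(cat "$path/operstate")" = up ] || continue   # keep only links that are up
    net_devs+=("$dev")
  done
done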
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:45.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:33:45.629 00:33:45.629 --- 10.0.0.2 ping statistics --- 00:33:45.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.629 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:45.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:45.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:33:45.629 00:33:45.629 --- 10.0.0.1 ping statistics --- 00:33:45.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.629 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2992997 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2992997 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2992997 ']' 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.629 11:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.629 [2024-11-20 11:34:37.320617] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:45.629 [2024-11-20 11:34:37.321692] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:33:45.629 [2024-11-20 11:34:37.321734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.629 [2024-11-20 11:34:37.421257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.629 [2024-11-20 11:34:37.456619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.629 [2024-11-20 11:34:37.456649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.629 [2024-11-20 11:34:37.456657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.629 [2024-11-20 11:34:37.456663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.629 [2024-11-20 11:34:37.456669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.629 [2024-11-20 11:34:37.457256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.629 [2024-11-20 11:34:37.512591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:45.629 [2024-11-20 11:34:37.512845] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
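Condensed, the nvmf_tcp_init sequence above builds a point-to-point test topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, the NVMe/TCP port is opened in iptables, and both directions are verified with ping before nvmf_tgt is started inside the namespace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator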
00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.629 [2024-11-20 11:34:38.162008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.629 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.629 [2024-11-20 11:34:38.190321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:45.630 11:34:38 
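The zcopy.sh setup above reduces to a short RPC sequence against the target: create the TCP transport with zero-copy enabled (--zcopy) and in-capsule data disabled (-c 0), create subsystem cnode1 with room for 10 namespaces, expose data and discovery listeners on 10.0.0.2:4420, and back it with a 32 MiB, 4 KiB-block malloc bdev attached as NSID 1. Issued by hand, the same arguments the rpc_cmd calls in the trace pass to scripts/rpc.py:

scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1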
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.630 malloc0 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:45.630 { 00:33:45.630 "params": { 00:33:45.630 "name": "Nvme$subsystem", 00:33:45.630 "trtype": "$TEST_TRANSPORT", 00:33:45.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:45.630 "adrfam": "ipv4", 00:33:45.630 "trsvcid": "$NVMF_PORT", 00:33:45.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:45.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:45.630 "hdgst": ${hdgst:-false}, 00:33:45.630 "ddgst": ${ddgst:-false} 00:33:45.630 }, 00:33:45.630 "method": "bdev_nvme_attach_controller" 00:33:45.630 } 00:33:45.630 EOF 00:33:45.630 )") 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:45.630 11:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:45.630 "params": { 00:33:45.630 "name": "Nvme1", 00:33:45.630 "trtype": "tcp", 00:33:45.630 "traddr": "10.0.0.2", 00:33:45.630 "adrfam": "ipv4", 00:33:45.630 "trsvcid": "4420", 00:33:45.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:45.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:45.630 "hdgst": false, 00:33:45.630 "ddgst": false 00:33:45.630 }, 00:33:45.630 "method": "bdev_nvme_attach_controller" 00:33:45.630 }' 00:33:45.630 [2024-11-20 11:34:38.294922] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:33:45.630 [2024-11-20 11:34:38.294986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993308 ] 00:33:45.889 [2024-11-20 11:34:38.386517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.889 [2024-11-20 11:34:38.425844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.148 Running I/O for 10 seconds... 00:33:48.031 6595.00 IOPS, 51.52 MiB/s [2024-11-20T10:34:41.714Z] 6612.50 IOPS, 51.66 MiB/s [2024-11-20T10:34:43.099Z] 6641.33 IOPS, 51.89 MiB/s [2024-11-20T10:34:44.041Z] 6645.50 IOPS, 51.92 MiB/s [2024-11-20T10:34:44.983Z] 6989.60 IOPS, 54.61 MiB/s [2024-11-20T10:34:45.925Z] 7438.00 IOPS, 58.11 MiB/s [2024-11-20T10:34:46.865Z] 7755.43 IOPS, 60.59 MiB/s [2024-11-20T10:34:47.805Z] 7994.25 IOPS, 62.46 MiB/s [2024-11-20T10:34:48.755Z] 8180.89 IOPS, 63.91 MiB/s [2024-11-20T10:34:48.755Z] 8328.80 IOPS, 65.07 MiB/s 00:33:56.013 Latency(us) 00:33:56.013 [2024-11-20T10:34:48.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.013 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:56.013 Verification LBA range: start 0x0 length 0x1000 00:33:56.013 Nvme1n1 : 10.01 8333.97 65.11 0.00 0.00 15313.41 1740.80 26978.99 00:33:56.013 [2024-11-20T10:34:48.755Z] =================================================================================================================== 00:33:56.013 [2024-11-20T10:34:48.755Z] Total : 8333.97 65.11 0.00 0.00 15313.41 1740.80 26978.99 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2995245 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:56.274 { 00:33:56.274 "params": { 00:33:56.274 "name": "Nvme$subsystem", 00:33:56.274 "trtype": "$TEST_TRANSPORT", 00:33:56.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:56.274 "adrfam": "ipv4", 00:33:56.274 "trsvcid": "$NVMF_PORT", 00:33:56.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:56.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:56.274 "hdgst": ${hdgst:-false}, 00:33:56.274 "ddgst": ${ddgst:-false} 00:33:56.274 }, 00:33:56.274 "method": "bdev_nvme_attach_controller" 00:33:56.274 } 00:33:56.274 EOF 00:33:56.274 )") 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:56.274 
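In the verify run above, bdevperf drives queue depth 128 (-q 128) of 8 KiB I/Os (-o 8192) for 10 seconds (-t 10); throughput ramps from about 6.6k to 8.3k IOPS over the run and settles at 8333.97 IOPS in the Latency table. The MiB/s column is just IOPS times I/O size, which is easy to check:

# 8333.97 IOPS * 8192 B per I/O ~= 65.11 MiB/s, matching the table above
awk 'BEGIN { printf "%.2f MiB/s\n", 8333.97 * 8192 / (1024 * 1024) }'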
[2024-11-20 11:34:48.833584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.274 [2024-11-20 11:34:48.833610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:56.274 [2024-11-20 11:34:48.841562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.274 [2024-11-20 11:34:48.841579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:56.274 11:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:56.275 "params": { 00:33:56.275 "name": "Nvme1", 00:33:56.275 "trtype": "tcp", 00:33:56.275 "traddr": "10.0.0.2", 00:33:56.275 "adrfam": "ipv4", 00:33:56.275 "trsvcid": "4420", 00:33:56.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:56.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:56.275 "hdgst": false, 00:33:56.275 "ddgst": false 00:33:56.275 }, 00:33:56.275 "method": "bdev_nvme_attach_controller" 00:33:56.275 }' 00:33:56.275 [2024-11-20 11:34:48.849557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.849574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.857554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.857570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.865553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.865569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.876230] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
00:33:56.275 [2024-11-20 11:34:48.876279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2995245 ] 00:33:56.275 [2024-11-20 11:34:48.877556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.877573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.885554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.885571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.893554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.893570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.901553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.901568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.909555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.909571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.917553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.917568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.925553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.925568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.933553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.933569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.941553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.941569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.949552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.949568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.957554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.957570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.961202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.275 [2024-11-20 11:34:48.965554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.965571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.973554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.973571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:56.275 [2024-11-20 11:34:48.981553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.981569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.989553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.989569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:48.990343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.275 [2024-11-20 11:34:48.997555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:48.997571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.275 [2024-11-20 11:34:49.005556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.275 [2024-11-20 11:34:49.005574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.013556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.013575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.021553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.021570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.029554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.029571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.037553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.037570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.045553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.045569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.053549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.053564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.061554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.061571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.069552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.069568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.077554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.077570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.085554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.085570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 
11:34:49.093553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.093568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.101552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.101568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.109553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.109568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.117552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.117566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.125553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.125569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.133553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.133569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.141551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.141567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.149549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.149563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 [2024-11-20 11:34:49.157553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:56.536 [2024-11-20 11:34:49.157570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:56.536 Running I/O for 5 seconds... 
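The repeated error pairs around this point come from the test itself: while bdevperf runs its 5-second random read/write workload, the script keeps re-issuing the add-namespace RPC against cnode1 (apparently to exercise the subsystem pause/resume path that nvmf_rpc_ns_paused implements), and every attempt is rejected because NSID 1 is still attached. The failure reproduces in isolation with the same arguments as the earlier rpc_cmd call, assuming the namespace already exists:

# fails with "Requested NSID 1 already in use" while NSID 1 is attached
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1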
(condensed: the same paired records — subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeat back to back dozens of times from [2024-11-20 11:34:49.165553] through 11:34:50.146 while the 5-second run proceeds; only the timestamps differ) [2024-11-20 11:34:50.146350]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.146369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.157458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.157478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 18524.00 IOPS, 144.72 MiB/s [2024-11-20T10:34:50.321Z] [2024-11-20 11:34:50.170520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.170539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.181512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.181530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.187797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.187814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.196382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.196401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.209905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.209923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.222645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.222663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.233798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.579 [2024-11-20 11:34:50.233817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.579 [2024-11-20 11:34:50.240042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.580 [2024-11-20 11:34:50.240061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.580 [2024-11-20 11:34:50.253106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.580 [2024-11-20 11:34:50.253124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.580 [2024-11-20 11:34:50.266060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.580 [2024-11-20 11:34:50.266078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.580 [2024-11-20 11:34:50.278774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.580 [2024-11-20 11:34:50.278792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.580 [2024-11-20 11:34:50.289658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.580 [2024-11-20 11:34:50.289677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.580 [2024-11-20 11:34:50.295818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:57.580 [2024-11-20 11:34:50.295837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.580 [2024-11-20 11:34:50.308942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.580 [2024-11-20 11:34:50.308960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.322303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.322322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.331806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.331824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.341028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.341050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.354257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.354276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.366895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.366914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.378068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.378086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.390840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.390858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.401436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.401455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.414454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.414472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.425849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.425867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.438683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.438702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.449848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.449867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.462436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.462453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.474121] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.840 [2024-11-20 11:34:50.474140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.840 [2024-11-20 11:34:50.486879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.486898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.497248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.497276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.510674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.510692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.521307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.521326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.534583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.534601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.546882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.546900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.557189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.557208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:57.841 [2024-11-20 11:34:50.570527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:57.841 [2024-11-20 11:34:50.570550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.581169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.581188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.594399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.594417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.606518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.606536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.617732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.617751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.623920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.623939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.637366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.637384] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.650252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.650270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.662547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.662566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.673998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.674016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.686897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.686915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.697287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.697306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.710373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.710392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.722795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.722813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.734600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.734618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.746527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.746545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.101 [2024-11-20 11:34:50.758224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.101 [2024-11-20 11:34:50.758242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.102 [2024-11-20 11:34:50.770522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.102 [2024-11-20 11:34:50.770540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.102 [2024-11-20 11:34:50.781616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.102 [2024-11-20 11:34:50.781635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.102 [2024-11-20 11:34:50.795054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.102 [2024-11-20 11:34:50.795078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.102 [2024-11-20 11:34:50.804545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.102 [2024-11-20 11:34:50.804563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.102 [2024-11-20 11:34:50.818157] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.102 [2024-11-20 11:34:50.818181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.102 [2024-11-20 11:34:50.830171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.102 [2024-11-20 11:34:50.830189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.842635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.842654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.854316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.854334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.866466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.866483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.877573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.877592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.883818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.883836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.892944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.892962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.906146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.906170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.918751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.918769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.929210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.929229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.942289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.942307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.954399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.954417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.966778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.966796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.977718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.977736] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.983676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.983695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:50.992932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:50.992950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.006330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.006352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.018742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.018760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.029290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.029309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.042359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.042377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.054773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.054791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.065857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.065875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.078679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.078697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.363 [2024-11-20 11:34:51.088623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.363 [2024-11-20 11:34:51.088641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.101926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.101945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.114722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.114741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.125728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.125747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.132071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.132089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.145326] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.145344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.158350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.158368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.171009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.171028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 18585.50 IOPS, 145.20 MiB/s [2024-11-20T10:34:51.367Z] [2024-11-20 11:34:51.181308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.181326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.194707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.194725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.205214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.205232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.218563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.218581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.230888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.230906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.242798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.242816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.254026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.254043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.266918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.266936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.277481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.277500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.283758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.283777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.293052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.293070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.306406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:58.625 [2024-11-20 11:34:51.306424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.317799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.317818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.324038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.324056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.336981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.336998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.350258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.350275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.625 [2024-11-20 11:34:51.361776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.625 [2024-11-20 11:34:51.361794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.367963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.367982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.377143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.377168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.390291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.390308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.401453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.401472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.407739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.407757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.417419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.417437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.430871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.430888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.441536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.441553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.454627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.454645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.466494] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.466512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.477605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.477623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.483775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.483793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.492617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.492634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.505965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.505982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.518679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.518697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.529542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.529560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.535657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.535675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.544804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.544822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.558056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.558074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.570907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.570924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.580744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.580763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.593988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.594006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.607288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.607307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:58.887 [2024-11-20 11:34:51.616184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:58.887 [2024-11-20 11:34:51.616202] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.629078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.629096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.642955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.642974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.653384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.653403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.666431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.666448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.678781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.678800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.690293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.690311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.702990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.703009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.713250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.713268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.726483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.726500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.737741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.737759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.743915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.743934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.753177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.753196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.766417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.766434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.778936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.778954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.789510] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.789528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.795715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.795734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.804916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.804933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.817976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.817993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.830545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.830563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.841565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.841587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.855019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.147 [2024-11-20 11:34:51.855038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.147 [2024-11-20 11:34:51.865173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.148 [2024-11-20 11:34:51.865192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.148 [2024-11-20 11:34:51.878541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.148 [2024-11-20 11:34:51.878560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.889688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.889708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.895922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.895941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.904914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.904932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.918457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.918474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.929588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.929607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.942735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.942754] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.953482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.953502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.959610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.959628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.968786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.968805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.982125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.982143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:51.994776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:51.994794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:52.006574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:52.006593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:52.018441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:52.018460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:52.029737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:52.029756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:52.036091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:52.036110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:52.049316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.409 [2024-11-20 11:34:52.049338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.409 [2024-11-20 11:34:52.062720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.062738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.073386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.073405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.086684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.086702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.097909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.097926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.110731] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.110750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.122671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.122689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.133874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.133892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.410 [2024-11-20 11:34:52.146889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.410 [2024-11-20 11:34:52.146908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.155453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.155472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.165375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.165393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 18586.33 IOPS, 145.21 MiB/s [2024-11-20T10:34:52.413Z] [2024-11-20 11:34:52.178526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.178544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.189705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.189724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.195823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.195842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.204929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.204948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.218290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.218308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.230605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.230623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.240736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.240755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.254035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.254053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.266713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:59.671 [2024-11-20 11:34:52.266736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.277773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.277791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.290774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.290792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.301572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.301590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.307750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.307768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.317032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.317050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.330695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.330712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.341878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.341896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.354574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.354593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.365763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.365781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.372015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.372033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.379621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.379640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.389169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.389187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.671 [2024-11-20 11:34:52.402253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.671 [2024-11-20 11:34:52.402271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.415037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.415056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.426499] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.426518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.438463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.438481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.450841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.450859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.461918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.461936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.474778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.474797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.485393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.485412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.498739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.498758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.509415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.509433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.522894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.522911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.533450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.533469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.546496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.546514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.557650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.557669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.563945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.563963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.577324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.577342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:59.933 [2024-11-20 11:34:52.590224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:59.933 [2024-11-20 11:34:52.590243] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:59.933 [2024-11-20 11:34:52.602724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:59.933 [2024-11-20 11:34:52.602742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace" pair repeats with advancing timestamps, 11:34:52.614277 through 11:34:53.164209 ...]
00:34:00.456 18578.50 IOPS, 145.14 MiB/s [2024-11-20T10:34:53.198Z]
[... the error pair continues, 11:34:53.177320 through 11:34:54.169313 ...]
00:34:01.505 18585.80 IOPS, 145.20 MiB/s [2024-11-20T10:34:54.247Z]
00:34:01.505 [2024-11-20 11:34:54.181692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.505 [2024-11-20 11:34:54.181710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:01.505
00:34:01.505 Latency(us)
00:34:01.505 [2024-11-20T10:34:54.247Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:01.505 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:01.505 Nvme1n1                     :       5.01   18587.43     145.21       0.00     0.00    6879.84    2471.25   11414.19
00:34:01.505 [2024-11-20T10:34:54.247Z] ===================================================================================================================
00:34:01.505 [2024-11-20T10:34:54.247Z] Total                       :              18587.43     145.21       0.00     0.00    6879.84    2471.25   11414.19
00:34:01.505 [2024-11-20 11:34:54.189551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.505 [2024-11-20 11:34:54.189568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
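The error pairs above are the expected outcome of this phase of zcopy.sh: the test keeps re-issuing nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 while NSID 1 is still attached, so subsystem.c:2123 rejects every attempt and nvmf_rpc.c:1517 reports the failed RPC, all while the verify workload keeps running (the interleaved IOPS lines). A minimal sketch of the failing call, assuming rpc.py from the checked-out tree (the test itself drives this through its rpc_cmd wrapper):

    # Hedged sketch: re-adding a namespace ID that is already attached must fail.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # NSID 1 is already in use on cnode1, so the call returns non-zero and the
    # target logs "Requested NSID 1 already in use".
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || echo "add_ns failed as expected (NSID 1 in use)"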
00:34:01.505 [2024-11-20 11:34:54.197553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:01.505 [2024-11-20 11:34:54.197569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair continues while the workload is wound down, 11:34:54.205559 through 11:34:54.281566 ...]
00:34:01.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2995245) - No such process
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2995245
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create
-b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.768 delay0
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:01.768 11:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:34:01.768 [2024-11-20 11:34:54.480342] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:09.925 Initializing NVMe Controllers
00:34:09.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:09.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:09.925 Initialization complete. Launching workers.
00:34:09.925 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6929
00:34:09.925 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7216, failed to submit 33
00:34:09.925 success 7076, unsuccessful 140, failed 0
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:09.925 rmmod nvme_tcp
00:34:09.925 rmmod nvme_fabrics
00:34:09.925 rmmod nvme_keyring
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2992997 ']'
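To make the abort workload meaningful, zcopy.sh first wraps malloc0 in a delay bdev so that I/O stays outstanding long enough to be aborted, re-attaches it as NSID 1, and then runs the abort example against the target. A condensed sketch of the sequence just traced, written as standalone rpc.py calls (the test itself goes through its rpc_cmd wrapper, and the workspace path is taken from the log):

    # Hedged recap of the traced sequence; delay bdev latencies are given in
    # microseconds, so 1000000 everywhere means roughly 1 s average and p99
    # latency for both reads and writes.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk"/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$spdk"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # One core, 5 s run, queue depth 64, 50/50 random read/write; the tool
    # submits aborts for its own outstanding commands over NVMe/TCP.
    "$spdk"/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary above (7216 submitted, 7076 successful) confirms that most in-flight commands could indeed be aborted while the delay bdev held them.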
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2992997
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2992997 ']'
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2992997
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2992997
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2992997'
00:34:09.925 killing process with pid 2992997
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2992997
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2992997
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:09.925 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:34:09.926 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:09.926 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:09.926 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:09.926 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:09.926 11:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:11.311
00:34:11.311 real    0m34.065s
00:34:11.311 user    0m43.964s
00:34:11.311 sys     0m12.356s
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:11.311 ************************************
00:34:11.311 END TEST nvmf_zcopy
00:34:11.311 ************************************
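nvmftestfini's teardown, traced above, follows a fixed pattern: kill the target only after checking that the pid still names an SPDK reactor, unload the kernel NVMe/TCP modules, strip the firewall rules the test added, and remove the target's network namespace. A hedged sketch of the two less obvious steps (the namespace name is taken from the trace; the SPDK_NVMF tag is how those rules are marked in the iptables dump filtered above):

    # Re-apply an iptables dump with SPDK-tagged rules filtered out, which
    # deletes only the rules the test added.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the target-side namespace created during nvmftestinit.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true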
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:11.311 ************************************
00:34:11.311 START TEST nvmf_nmic
00:34:11.311 ************************************
00:34:11.311 11:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:34:11.572 * Looking for test storage...
00:34:11.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:11.572 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:34:11.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:11.573 --rc genhtml_branch_coverage=1
00:34:11.573 --rc genhtml_function_coverage=1
00:34:11.573 --rc genhtml_legend=1
00:34:11.573 --rc geninfo_all_blocks=1
00:34:11.573 --rc geninfo_unexecuted_blocks=1
00:34:11.573
00:34:11.573 '
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
[... the same multi-line --rc option block as above ...]
00:34:11.573 '
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
[... the same multi-line --rc option block as above ...]
00:34:11.573 '
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
[... the same multi-line --rc option block as above ...]
00:34:11.573 '
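The walk just traced is scripts/common.sh deciding that lcov 1.15 predates 2.x: cmp_versions splits both versions on '.', '-', and ':', treats missing fields as zero, and compares numerically field by field. A simplified, self-contained rendering of the same idea (the helper name version_lt is hypothetical; the real implementation is cmp_versions/lt in scripts/common.sh):

    # version_lt A B -> exit 0 iff A sorts strictly before B, numeric fieldwise.
    version_lt() {
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((10#${v1[i]:-0} < 10#${v2[i]:-0})) && return 0
            ((10#${v1[i]:-0} > 10#${v2[i]:-0})) && return 1
        done
        return 1    # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"   # matches the trace: 1 < 2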
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
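NVME_HOSTNQN above comes straight from nvme-cli: gen-hostnqn emits an NQN in the 2014-08 UUID form, and common.sh then derives the bare host ID from it. A hedged recap (the exact parameter expansion is an assumption for illustration, not a quote from common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # strip everything through ':uuid:' to get the UUID
    # Later connect calls pass both through the NVME_HOST array:
    #   nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" ...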
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated from earlier sourcings ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated toolchain directories ...]:/var/lib/snapd/snap/bin
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated toolchain directories ...]:/var/lib/snapd/snap/bin
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated toolchain directories ...]:/var/lib/snapd/snap/bin
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
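The PATH values above balloon because paths/export.sh prepends the same toolchain directories every time it is sourced, once per nested test script. That is harmless here, but an idempotent variant would dedupe before exporting; a hedged sketch of that alternative, not what export.sh actually does:

    # Keep the first occurrence of each PATH component, preserving order.
    dedupe_path() {
        local dir out=
        local IFS=:
        for dir in $PATH; do
            [[ ":$out:" == *":$dir:"* ]] || out+=${out:+:}$dir
        done
        PATH=$out
    }
    dedupe_path && export PATH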
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:34:11.573 11:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:19.710 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:19.710 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:34:19.710 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:19.710 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:19.710 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:19.711 11:35:11
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:19.711 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.711 11:35:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:19.711 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:19.711 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.711 
11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:19.711 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
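[annotation] nvmf_tcp_init builds the test topology from the two discovered E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The equivalent commands, condensed from this stretch of the trace (the link-up, iptables ACCEPT, and ping verification continue just below):

ip -4 addr flush cvl_0_0                            # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up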
00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:19.711 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:19.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:19.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:34:19.712 00:34:19.712 --- 10.0.0.2 ping statistics --- 00:34:19.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.712 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:19.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:19.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:34:19.712 00:34:19.712 --- 10.0.0.1 ping statistics --- 00:34:19.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:19.712 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3001698 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3001698 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3001698 ']' 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:19.712 11:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 [2024-11-20 11:35:11.467787] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:19.712 [2024-11-20 11:35:11.468916] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:34:19.712 [2024-11-20 11:35:11.468968] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.712 [2024-11-20 11:35:11.571690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.712 [2024-11-20 11:35:11.626720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.712 [2024-11-20 11:35:11.626771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.712 [2024-11-20 11:35:11.626779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.712 [2024-11-20 11:35:11.626791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.712 [2024-11-20 11:35:11.626798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.712 [2024-11-20 11:35:11.628777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.712 [2024-11-20 11:35:11.628915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.712 [2024-11-20 11:35:11.629042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.712 [2024-11-20 11:35:11.629043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.712 [2024-11-20 11:35:11.706132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.712 [2024-11-20 11:35:11.706926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.712 [2024-11-20 11:35:11.707777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
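[annotation] nvmfappstart launches nvmf_tgt inside the target namespace with --interrupt-mode and -m 0xF (four reactor cores), then waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock; the reactor and intr-mode notices above are that startup completing. A hedged sketch of the launch-and-wait pattern; the polling loop is illustrative (the real waitforlisten lives in autotest_common.sh) and the rpc.py path assumes the repo layout used above:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the target is ready (sketch of waitforlisten).
for _ in $(seq 1 100); do
  if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
       -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; then
    break
  fi
  sleep 0.1
done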
00:34:19.712 [2024-11-20 11:35:11.707925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:19.712 [2024-11-20 11:35:11.708050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 [2024-11-20 11:35:12.323333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 Malloc0 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
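[annotation] With the target up, the nmic test provisions it entirely over JSON-RPC: a TCP transport with 8 KiB in-capsule data, a 64 MiB ramdisk bdev with 512-byte blocks, a subsystem, its namespace, and a listener on 10.0.0.2:4420. The same sequence as plain rpc.py calls, with every value taken from the trace (rpc_cmd in the log is just a wrapper around scripts/rpc.py against the default socket):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420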
00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 [2024-11-20 11:35:12.418413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:19.712 test case1: single bdev can't be used in multiple subsystems 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.712 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.973 [2024-11-20 11:35:12.453690] bdev.c:8203:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:19.973 [2024-11-20 11:35:12.453718] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:19.973 [2024-11-20 11:35:12.453727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.973 request: 00:34:19.973 { 00:34:19.973 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:19.973 "namespace": { 00:34:19.973 "bdev_name": "Malloc0", 00:34:19.973 "no_auto_visible": false 00:34:19.973 }, 00:34:19.973 "method": "nvmf_subsystem_add_ns", 00:34:19.973 "req_id": 1 00:34:19.973 } 00:34:19.973 Got JSON-RPC error response 00:34:19.973 response: 00:34:19.973 { 00:34:19.973 "code": -32602, 00:34:19.973 "message": "Invalid parameters" 00:34:19.973 } 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:19.973 11:35:12 
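[annotation] Test case 1 is a negative test: Malloc0 is already claimed exclusive_write by cnode1, so adding it to cnode2 must fail, and the JSON-RPC error above (code -32602, "Invalid parameters") is the pass condition. The script records the outcome through the RPC command's exit status; a condensed sketch of that check (the FAIL message here is illustrative, the rest follows the trace):

nmic_status=0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1
if [ "$nmic_status" -eq 0 ]; then
  echo 'FAIL: a single bdev was shared across two subsystems'   # illustrative
  exit 1
fi
echo ' Adding namespace failed - expected result.'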
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:19.973 Adding namespace failed - expected result. 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:19.973 test case2: host connect to nvmf target in multiple paths 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:19.973 [2024-11-20 11:35:12.465835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.973 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:20.232 11:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:20.802 11:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:20.802 11:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:20.802 11:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:20.802 11:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:20.802 11:35:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:22.715 11:35:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:22.715 [global] 00:34:22.715 thread=1 00:34:22.715 invalidate=1 
00:34:22.715 rw=write 00:34:22.715 time_based=1 00:34:22.715 runtime=1 00:34:22.715 ioengine=libaio 00:34:22.715 direct=1 00:34:22.715 bs=4096 00:34:22.715 iodepth=1 00:34:22.715 norandommap=0 00:34:22.715 numjobs=1 00:34:22.715 00:34:22.715 verify_dump=1 00:34:22.715 verify_backlog=512 00:34:22.715 verify_state_save=0 00:34:22.715 do_verify=1 00:34:22.715 verify=crc32c-intel 00:34:22.715 [job0] 00:34:22.715 filename=/dev/nvme0n1 00:34:22.715 Could not set queue depth (nvme0n1) 00:34:23.286 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:23.286 fio-3.35 00:34:23.286 Starting 1 thread 00:34:24.228 00:34:24.228 job0: (groupid=0, jobs=1): err= 0: pid=3002731: Wed Nov 20 11:35:16 2024 00:34:24.228 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1017msec) 00:34:24.228 slat (nsec): min=26993, max=27593, avg=27208.63, stdev=155.11 00:34:24.228 clat (usec): min=40959, max=41996, avg=41786.40, stdev=380.32 00:34:24.228 lat (usec): min=40987, max=42023, avg=41813.61, stdev=380.30 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:24.228 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:24.228 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:24.228 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:24.228 | 99.99th=[42206] 00:34:24.228 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:24.228 slat (usec): min=9, max=30709, avg=90.84, stdev=1355.87 00:34:24.228 clat (usec): min=298, max=824, avg=578.01, stdev=92.53 00:34:24.228 lat (usec): min=334, max=31399, avg=668.85, stdev=1364.24 00:34:24.228 clat percentiles (usec): 00:34:24.228 | 1.00th=[ 343], 5.00th=[ 416], 10.00th=[ 457], 20.00th=[ 502], 00:34:24.228 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 603], 00:34:24.228 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 725], 00:34:24.228 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 824], 00:34:24.228 | 99.99th=[ 824] 00:34:24.228 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:24.228 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:24.228 lat (usec) : 500=18.94%, 750=76.52%, 1000=1.52% 00:34:24.228 lat (msec) : 50=3.03% 00:34:24.228 cpu : usr=1.28%, sys=1.67%, ctx=531, majf=0, minf=1 00:34:24.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.228 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:24.228 00:34:24.228 Run status group 0 (all jobs): 00:34:24.228 READ: bw=62.9KiB/s (64.4kB/s), 62.9KiB/s-62.9KiB/s (64.4kB/s-64.4kB/s), io=64.0KiB (65.5kB), run=1017-1017msec 00:34:24.228 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:34:24.228 00:34:24.228 Disk stats (read/write): 00:34:24.228 nvme0n1: ios=38/512, merge=0/0, ticks=1509/227, in_queue=1736, util=98.70% 00:34:24.228 11:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:24.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:24.489 11:35:17 
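[annotation] For test case 2 the host connects to cnode1 over both listeners (ports 4420 and 4421), and fio-wrapper drives a one-second, queue-depth-1, 4 KiB verified write job against /dev/nvme0n1. The run completes cleanly (err= 0, 2048KiB written with crc32c-intel verification), after which the disconnect above tears down both controllers. The job file the wrapper ran, reassembled verbatim from the trace:

[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1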
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.489 rmmod nvme_tcp 00:34:24.489 rmmod nvme_fabrics 00:34:24.489 rmmod nvme_keyring 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3001698 ']' 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3001698 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3001698 ']' 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3001698 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.489 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3001698 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3001698' 00:34:24.766 killing process with pid 3001698 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3001698 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3001698 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.766 11:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.731 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.731 00:34:26.731 real 0m15.499s 00:34:26.731 user 0m35.053s 00:34:26.732 sys 0m7.123s 00:34:26.732 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.732 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.732 ************************************ 00:34:26.732 END TEST nvmf_nmic 00:34:26.732 ************************************ 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:26.995 ************************************ 00:34:26.995 START TEST nvmf_fio_target 00:34:26.995 ************************************ 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:26.995 * Looking for test storage... 
00:34:26.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:26.995 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.258 --rc genhtml_branch_coverage=1 00:34:27.258 --rc genhtml_function_coverage=1 00:34:27.258 --rc genhtml_legend=1 00:34:27.258 --rc geninfo_all_blocks=1 00:34:27.258 --rc geninfo_unexecuted_blocks=1 00:34:27.258 00:34:27.258 ' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.258 --rc genhtml_branch_coverage=1 00:34:27.258 --rc genhtml_function_coverage=1 00:34:27.258 --rc genhtml_legend=1 00:34:27.258 --rc geninfo_all_blocks=1 00:34:27.258 --rc geninfo_unexecuted_blocks=1 00:34:27.258 00:34:27.258 ' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.258 --rc genhtml_branch_coverage=1 00:34:27.258 --rc genhtml_function_coverage=1 00:34:27.258 --rc genhtml_legend=1 00:34:27.258 --rc geninfo_all_blocks=1 00:34:27.258 --rc geninfo_unexecuted_blocks=1 00:34:27.258 00:34:27.258 ' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.258 --rc genhtml_branch_coverage=1 00:34:27.258 --rc genhtml_function_coverage=1 00:34:27.258 --rc genhtml_legend=1 00:34:27.258 --rc geninfo_all_blocks=1 00:34:27.258 --rc geninfo_unexecuted_blocks=1 00:34:27.258 
00:34:27.258 ' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.258 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.259 11:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.406 11:35:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.406 11:35:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:35.406 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:35.406 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.406 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:35.407 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:35.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.407 11:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:34:35.407 00:34:35.407 --- 10.0.0.2 ping statistics --- 00:34:35.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.407 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:35.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:34:35.407 00:34:35.407 --- 10.0.0.1 ping statistics --- 00:34:35.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.407 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3007239 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3007239 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3007239 ']' 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
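Condensed, the nvmf_tcp_init plumbing traced above comes down to the following sequence (commands taken verbatim from the trace; the addr flushes, loopback bring-up, and the iptables comment argument are omitted): one E810 port moves into a private namespace as the target side at 10.0.0.2, its sibling stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened, and the bidirectional pings validate the link before the target starts.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1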
00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.407 11:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.407 [2024-11-20 11:35:27.357413] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.407 [2024-11-20 11:35:27.358524] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:34:35.407 [2024-11-20 11:35:27.358576] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.407 [2024-11-20 11:35:27.455974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.407 [2024-11-20 11:35:27.509778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.407 [2024-11-20 11:35:27.509829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.407 [2024-11-20 11:35:27.509837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.407 [2024-11-20 11:35:27.509844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.407 [2024-11-20 11:35:27.509851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.407 [2024-11-20 11:35:27.511861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.407 [2024-11-20 11:35:27.512021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:35.407 [2024-11-20 11:35:27.512207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:35.407 [2024-11-20 11:35:27.512209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.407 [2024-11-20 11:35:27.589235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.407 [2024-11-20 11:35:27.590411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:35.407 [2024-11-20 11:35:27.590518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:35.407 [2024-11-20 11:35:27.590809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:35.407 [2024-11-20 11:35:27.590862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
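waitforlisten, which resolves just below, is at heart an RPC-socket poll against the freshly launched target. A minimal sketch of the idea, reusing the rpc.py path from this run (the loop bound, timeout, and probe method are illustrative assumptions, not the autotest_common.sh code):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
  # probe the RPC socket; any successful method call means the target is up
  "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.1
done
(( i < 100 ))  # non-zero status here means the target never answered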
00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.668 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:35.668 [2024-11-20 11:35:28.381258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.929 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:35.929 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:35.929 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.190 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:36.190 11:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.451 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:36.451 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.711 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:36.711 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:36.711 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:36.970 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:36.971 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.231 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:37.231 11:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:37.492 11:35:30 
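The rpc.py sequence running here, which continues below with the concat raid, the subsystem, its namespaces, and the listener, provisions the target entirely over its Unix-domain RPC socket. Condensed, with our reading of the flags in the comments (the comments are interpretation, not rpc.py output):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport; -o drops the C2H-success optimization, -u sets an 8 KiB IO unit
$rpc bdev_malloc_create 64 512                                  # 64 MB RAM-backed bdev with 512 B blocks (first call yields Malloc0)
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'  # RAID0 over two malloc bdevs, 64 KiB strip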
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:37.492 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:37.492 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:37.754 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:37.754 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:38.016 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:38.016 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:38.278 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.278 [2024-11-20 11:35:30.953197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.278 11:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:38.539 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:38.800 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:39.060 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:39.060 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:39.060 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:39.060 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:39.060 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:39.060 11:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:41.604 11:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:41.604 [global] 00:34:41.604 thread=1 00:34:41.604 invalidate=1 00:34:41.604 rw=write 00:34:41.604 time_based=1 00:34:41.604 runtime=1 00:34:41.604 ioengine=libaio 00:34:41.604 direct=1 00:34:41.604 bs=4096 00:34:41.604 iodepth=1 00:34:41.604 norandommap=0 00:34:41.604 numjobs=1 00:34:41.604 00:34:41.604 verify_dump=1 00:34:41.604 verify_backlog=512 00:34:41.604 verify_state_save=0 00:34:41.604 do_verify=1 00:34:41.604 verify=crc32c-intel 00:34:41.604 [job0] 00:34:41.604 filename=/dev/nvme0n1 00:34:41.604 [job1] 00:34:41.604 filename=/dev/nvme0n2 00:34:41.604 [job2] 00:34:41.604 filename=/dev/nvme0n3 00:34:41.604 [job3] 00:34:41.604 filename=/dev/nvme0n4 00:34:41.604 Could not set queue depth (nvme0n1) 00:34:41.604 Could not set queue depth (nvme0n2) 00:34:41.604 Could not set queue depth (nvme0n3) 00:34:41.604 Could not set queue depth (nvme0n4) 00:34:41.604 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.604 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.604 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.604 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:41.604 fio-3.35 00:34:41.604 Starting 4 threads 00:34:42.988 00:34:42.988 job0: (groupid=0, jobs=1): err= 0: pid=3008706: Wed Nov 20 11:35:35 2024 00:34:42.988 read: IOPS=16, BW=66.2KiB/s (67.8kB/s)(68.0KiB/1027msec) 00:34:42.988 slat (nsec): min=27469, max=28006, avg=27678.65, stdev=160.13 00:34:42.988 clat (usec): min=1196, max=42026, avg=39234.11, stdev=9812.72 00:34:42.988 lat (usec): min=1224, max=42054, avg=39261.79, stdev=9812.64 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[ 1205], 5.00th=[ 1205], 10.00th=[41157], 20.00th=[41157], 00:34:42.988 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:34:42.988 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:42.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:42.988 | 99.99th=[42206] 00:34:42.988 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:34:42.988 slat (nsec): min=9651, max=92460, avg=32663.20, stdev=10249.90 00:34:42.988 clat (usec): min=281, max=1297, avg=659.93, stdev=148.94 00:34:42.988 lat (usec): min=291, max=1333, avg=692.59, stdev=152.54 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 474], 20.00th=[ 537], 00:34:42.988 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 701], 00:34:42.988 | 70.00th=[ 742], 80.00th=[ 791], 90.00th=[ 848], 95.00th=[ 906], 
00:34:42.988 | 99.00th=[ 988], 99.50th=[ 1074], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:42.988 | 99.99th=[ 1303] 00:34:42.988 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.988 lat (usec) : 500=13.04%, 750=57.66%, 1000=25.33% 00:34:42.988 lat (msec) : 2=0.95%, 50=3.02% 00:34:42.988 cpu : usr=0.78%, sys=2.34%, ctx=531, majf=0, minf=1 00:34:42.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.988 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.988 job1: (groupid=0, jobs=1): err= 0: pid=3008725: Wed Nov 20 11:35:35 2024 00:34:42.988 read: IOPS=17, BW=71.1KiB/s (72.8kB/s)(72.0KiB/1013msec) 00:34:42.988 slat (nsec): min=9406, max=26818, avg=24585.39, stdev=5467.75 00:34:42.988 clat (usec): min=40849, max=41648, avg=41001.66, stdev=169.30 00:34:42.988 lat (usec): min=40876, max=41658, avg=41026.24, stdev=165.64 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:42.988 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:42.988 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:42.988 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:42.988 | 99.99th=[41681] 00:34:42.988 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:34:42.988 slat (nsec): min=3424, max=53964, avg=14717.95, stdev=8518.75 00:34:42.988 clat (usec): min=129, max=983, avg=515.29, stdev=181.51 00:34:42.988 lat (usec): min=141, max=1003, avg=530.01, stdev=186.44 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[ 194], 5.00th=[ 243], 10.00th=[ 297], 20.00th=[ 347], 00:34:42.988 | 30.00th=[ 404], 40.00th=[ 449], 50.00th=[ 486], 60.00th=[ 553], 00:34:42.988 | 70.00th=[ 603], 80.00th=[ 685], 90.00th=[ 791], 95.00th=[ 848], 00:34:42.988 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:34:42.988 | 99.99th=[ 988] 00:34:42.988 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.988 lat (usec) : 250=5.66%, 500=45.09%, 750=32.83%, 1000=13.02% 00:34:42.988 lat (msec) : 50=3.40% 00:34:42.988 cpu : usr=0.49%, sys=0.59%, ctx=532, majf=0, minf=1 00:34:42.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.988 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.988 job2: (groupid=0, jobs=1): err= 0: pid=3008743: Wed Nov 20 11:35:35 2024 00:34:42.988 read: IOPS=15, BW=62.9KiB/s (64.4kB/s)(64.0KiB/1017msec) 00:34:42.988 slat (nsec): min=9294, max=27224, avg=24806.12, stdev=6024.88 00:34:42.988 clat (usec): min=40962, max=42023, avg=41891.84, stdev=251.31 00:34:42.988 lat (usec): min=40988, max=42050, avg=41916.64, stdev=251.25 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 
20.00th=[41681], 00:34:42.988 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:42.988 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:42.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:42.988 | 99.99th=[42206] 00:34:42.988 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:34:42.988 slat (nsec): min=3211, max=55385, avg=13201.75, stdev=6712.27 00:34:42.988 clat (usec): min=228, max=1041, avg=657.24, stdev=143.07 00:34:42.988 lat (usec): min=238, max=1053, avg=670.45, stdev=144.05 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[ 293], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 545], 00:34:42.988 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 709], 00:34:42.988 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 881], 00:34:42.988 | 99.00th=[ 963], 99.50th=[ 988], 99.90th=[ 1045], 99.95th=[ 1045], 00:34:42.988 | 99.99th=[ 1045] 00:34:42.988 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.988 lat (usec) : 250=0.57%, 500=14.58%, 750=55.11%, 1000=26.52% 00:34:42.988 lat (msec) : 2=0.19%, 50=3.03% 00:34:42.988 cpu : usr=0.30%, sys=0.59%, ctx=529, majf=0, minf=1 00:34:42.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.988 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.988 job3: (groupid=0, jobs=1): err= 0: pid=3008749: Wed Nov 20 11:35:35 2024 00:34:42.988 read: IOPS=129, BW=517KiB/s (530kB/s)(528KiB/1021msec) 00:34:42.988 slat (nsec): min=9824, max=48896, avg=27271.48, stdev=5046.47 00:34:42.988 clat (usec): min=859, max=42011, avg=4781.82, stdev=11608.01 00:34:42.988 lat (usec): min=887, max=42038, avg=4809.09, stdev=11607.62 00:34:42.988 clat percentiles (usec): 00:34:42.988 | 1.00th=[ 873], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1090], 00:34:42.988 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:34:42.988 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1270], 95.00th=[41157], 00:34:42.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:42.988 | 99.99th=[42206] 00:34:42.988 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:34:42.988 slat (usec): min=3, max=29917, avg=82.94, stdev=1322.27 00:34:42.989 clat (usec): min=276, max=1005, avg=662.13, stdev=133.45 00:34:42.989 lat (usec): min=289, max=30510, avg=745.07, stdev=1326.31 00:34:42.989 clat percentiles (usec): 00:34:42.989 | 1.00th=[ 359], 5.00th=[ 437], 10.00th=[ 482], 20.00th=[ 545], 00:34:42.989 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 676], 60.00th=[ 709], 00:34:42.989 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 824], 95.00th=[ 881], 00:34:42.989 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1004], 99.95th=[ 1004], 00:34:42.989 | 99.99th=[ 1004] 00:34:42.989 bw ( KiB/s): min= 4096, max= 4096, per=51.35%, avg=4096.00, stdev= 0.00, samples=1 00:34:42.989 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:42.989 lat (usec) : 500=9.16%, 750=49.07%, 1000=21.89% 00:34:42.989 lat (msec) : 2=18.01%, 50=1.86% 00:34:42.989 cpu : usr=0.39%, sys=1.57%, ctx=649, majf=0, minf=1 
00:34:42.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.989 issued rwts: total=132,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:42.989 00:34:42.989 Run status group 0 (all jobs): 00:34:42.989 READ: bw=713KiB/s (730kB/s), 62.9KiB/s-517KiB/s (64.4kB/s-530kB/s), io=732KiB (750kB), run=1013-1027msec 00:34:42.989 WRITE: bw=7977KiB/s (8168kB/s), 1994KiB/s-2022KiB/s (2042kB/s-2070kB/s), io=8192KiB (8389kB), run=1013-1027msec 00:34:42.989 00:34:42.989 Disk stats (read/write): 00:34:42.989 nvme0n1: ios=61/512, merge=0/0, ticks=663/252, in_queue=915, util=83.87% 00:34:42.989 nvme0n2: ios=63/512, merge=0/0, ticks=840/257, in_queue=1097, util=87.84% 00:34:42.989 nvme0n3: ios=33/512, merge=0/0, ticks=1341/327, in_queue=1668, util=91.96% 00:34:42.989 nvme0n4: ios=160/512, merge=0/0, ticks=899/324, in_queue=1223, util=96.47% 00:34:42.989 11:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:42.989 [global] 00:34:42.989 thread=1 00:34:42.989 invalidate=1 00:34:42.989 rw=randwrite 00:34:42.989 time_based=1 00:34:42.989 runtime=1 00:34:42.989 ioengine=libaio 00:34:42.989 direct=1 00:34:42.989 bs=4096 00:34:42.989 iodepth=1 00:34:42.989 norandommap=0 00:34:42.989 numjobs=1 00:34:42.989 00:34:42.989 verify_dump=1 00:34:42.989 verify_backlog=512 00:34:42.989 verify_state_save=0 00:34:42.989 do_verify=1 00:34:42.989 verify=crc32c-intel 00:34:42.989 [job0] 00:34:42.989 filename=/dev/nvme0n1 00:34:42.989 [job1] 00:34:42.989 filename=/dev/nvme0n2 00:34:42.989 [job2] 00:34:42.989 filename=/dev/nvme0n3 00:34:42.989 [job3] 00:34:42.989 filename=/dev/nvme0n4 00:34:42.989 Could not set queue depth (nvme0n1) 00:34:42.989 Could not set queue depth (nvme0n2) 00:34:42.989 Could not set queue depth (nvme0n3) 00:34:42.989 Could not set queue depth (nvme0n4) 00:34:43.250 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.250 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.250 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.250 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:43.250 fio-3.35 00:34:43.250 Starting 4 threads 00:34:44.633 00:34:44.633 job0: (groupid=0, jobs=1): err= 0: pid=3009163: Wed Nov 20 11:35:37 2024 00:34:44.633 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:34:44.633 slat (nsec): min=25776, max=26366, avg=26065.58, stdev=174.58 00:34:44.633 clat (usec): min=40916, max=41524, avg=40994.72, stdev=132.74 00:34:44.633 lat (usec): min=40942, max=41550, avg=41020.78, stdev=132.76 00:34:44.633 clat percentiles (usec): 00:34:44.633 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:44.633 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:44.633 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:44.633 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:44.633 | 99.99th=[41681] 00:34:44.633 write: IOPS=506, 
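Each fio-wrapper invocation in this test emits essentially the same four-job file, varying only rw= (and, in the later runs, iodepth=). A hypothetical standalone equivalent of job0 above for reproducing one job outside the wrapper (option spellings are standard fio; the wrapper's generated file remains authoritative):

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 --numjobs=1 \
    --rw=randwrite --time_based=1 --runtime=1 --invalidate=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0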
BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:44.633 slat (nsec): min=9215, max=50832, avg=25200.56, stdev=10565.11 00:34:44.633 clat (usec): min=244, max=636, avg=419.29, stdev=74.40 00:34:44.633 lat (usec): min=254, max=649, avg=444.49, stdev=80.23 00:34:44.633 clat percentiles (usec): 00:34:44.633 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 318], 20.00th=[ 343], 00:34:44.633 | 30.00th=[ 363], 40.00th=[ 420], 50.00th=[ 445], 60.00th=[ 457], 00:34:44.633 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 498], 95.00th=[ 519], 00:34:44.633 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 635], 00:34:44.633 | 99.99th=[ 635] 00:34:44.633 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.633 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.633 lat (usec) : 250=0.38%, 500=86.82%, 750=9.23% 00:34:44.633 lat (msec) : 50=3.58% 00:34:44.633 cpu : usr=0.79%, sys=1.19%, ctx=531, majf=0, minf=1 00:34:44.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.634 job1: (groupid=0, jobs=1): err= 0: pid=3009179: Wed Nov 20 11:35:37 2024 00:34:44.634 read: IOPS=609, BW=2438KiB/s (2496kB/s)(2440KiB/1001msec) 00:34:44.634 slat (nsec): min=6720, max=68566, avg=24957.77, stdev=6328.34 00:34:44.634 clat (usec): min=337, max=40954, avg=878.01, stdev=2297.46 00:34:44.634 lat (usec): min=345, max=40980, avg=902.97, stdev=2297.56 00:34:44.634 clat percentiles (usec): 00:34:44.634 | 1.00th=[ 445], 5.00th=[ 523], 10.00th=[ 553], 20.00th=[ 635], 00:34:44.634 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 799], 00:34:44.634 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 898], 95.00th=[ 930], 00:34:44.634 | 99.00th=[ 1020], 99.50th=[ 1074], 99.90th=[41157], 99.95th=[41157], 00:34:44.634 | 99.99th=[41157] 00:34:44.634 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:34:44.634 slat (nsec): min=9195, max=63211, avg=28922.53, stdev=8850.18 00:34:44.634 clat (usec): min=104, max=791, avg=397.72, stdev=121.20 00:34:44.634 lat (usec): min=114, max=823, avg=426.64, stdev=123.99 00:34:44.634 clat percentiles (usec): 00:34:44.634 | 1.00th=[ 135], 5.00th=[ 219], 10.00th=[ 243], 20.00th=[ 293], 00:34:44.634 | 30.00th=[ 318], 40.00th=[ 355], 50.00th=[ 396], 60.00th=[ 420], 00:34:44.634 | 70.00th=[ 469], 80.00th=[ 510], 90.00th=[ 562], 95.00th=[ 594], 00:34:44.634 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 766], 99.95th=[ 791], 00:34:44.634 | 99.99th=[ 791] 00:34:44.634 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.634 lat (usec) : 250=6.67%, 500=42.17%, 750=30.97%, 1000=19.58% 00:34:44.634 lat (msec) : 2=0.49%, 50=0.12% 00:34:44.634 cpu : usr=2.50%, sys=4.60%, ctx=1634, majf=0, minf=1 00:34:44.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 issued rwts: total=610,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.634 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:34:44.634 job2: (groupid=0, jobs=1): err= 0: pid=3009195: Wed Nov 20 11:35:37 2024 00:34:44.634 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:34:44.634 slat (nsec): min=25655, max=26732, avg=25982.67, stdev=264.68 00:34:44.634 clat (usec): min=1038, max=42026, avg=39339.04, stdev=9570.39 00:34:44.634 lat (usec): min=1064, max=42052, avg=39365.02, stdev=9570.39 00:34:44.634 clat percentiles (usec): 00:34:44.634 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[40633], 20.00th=[41157], 00:34:44.634 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:44.634 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:44.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:44.634 | 99.99th=[42206] 00:34:44.634 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:34:44.634 slat (nsec): min=9743, max=52451, avg=30633.81, stdev=6692.82 00:34:44.634 clat (usec): min=182, max=971, avg=599.35, stdev=132.65 00:34:44.634 lat (usec): min=214, max=1005, avg=629.99, stdev=134.10 00:34:44.634 clat percentiles (usec): 00:34:44.634 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 424], 20.00th=[ 486], 00:34:44.634 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:34:44.634 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:34:44.634 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 971], 99.95th=[ 971], 00:34:44.634 | 99.99th=[ 971] 00:34:44.634 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.634 lat (usec) : 250=0.38%, 500=22.08%, 750=62.45%, 1000=11.70% 00:34:44.634 lat (msec) : 2=0.19%, 50=3.21% 00:34:44.634 cpu : usr=1.26%, sys=1.06%, ctx=530, majf=0, minf=1 00:34:44.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.634 job3: (groupid=0, jobs=1): err= 0: pid=3009196: Wed Nov 20 11:35:37 2024 00:34:44.634 read: IOPS=16, BW=65.7KiB/s (67.3kB/s)(68.0KiB/1035msec) 00:34:44.634 slat (nsec): min=25209, max=25717, avg=25501.82, stdev=144.37 00:34:44.634 clat (usec): min=41076, max=42051, avg=41915.26, stdev=222.45 00:34:44.634 lat (usec): min=41102, max=42077, avg=41940.77, stdev=222.43 00:34:44.634 clat percentiles (usec): 00:34:44.634 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:34:44.634 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:44.634 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:44.634 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:44.634 | 99.99th=[42206] 00:34:44.634 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:34:44.634 slat (nsec): min=9444, max=52060, avg=27326.53, stdev=9256.49 00:34:44.634 clat (usec): min=296, max=885, avg=593.37, stdev=110.69 00:34:44.634 lat (usec): min=313, max=917, avg=620.70, stdev=114.42 00:34:44.634 clat percentiles (usec): 00:34:44.634 | 1.00th=[ 330], 5.00th=[ 375], 10.00th=[ 449], 20.00th=[ 490], 00:34:44.634 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:34:44.634 | 70.00th=[ 
668], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 758], 00:34:44.634 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 889], 99.95th=[ 889], 00:34:44.634 | 99.99th=[ 889] 00:34:44.634 bw ( KiB/s): min= 4096, max= 4096, per=41.44%, avg=4096.00, stdev= 0.00, samples=1 00:34:44.634 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:44.634 lat (usec) : 500=20.60%, 750=70.70%, 1000=5.48% 00:34:44.634 lat (msec) : 50=3.21% 00:34:44.634 cpu : usr=0.77%, sys=1.35%, ctx=529, majf=0, minf=1 00:34:44.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:44.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:44.634 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:44.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:44.634 00:34:44.634 Run status group 0 (all jobs): 00:34:44.634 READ: bw=2564KiB/s (2625kB/s), 65.7KiB/s-2438KiB/s (67.3kB/s-2496kB/s), io=2656KiB (2720kB), run=1001-1036msec 00:34:44.634 WRITE: bw=9884KiB/s (10.1MB/s), 1977KiB/s-4092KiB/s (2024kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1036msec 00:34:44.634 00:34:44.634 Disk stats (read/write): 00:34:44.634 nvme0n1: ios=53/512, merge=0/0, ticks=700/210, in_queue=910, util=90.58% 00:34:44.634 nvme0n2: ios=533/794, merge=0/0, ticks=492/291, in_queue=783, util=86.44% 00:34:44.634 nvme0n3: ios=59/512, merge=0/0, ticks=604/292, in_queue=896, util=91.77% 00:34:44.634 nvme0n4: ios=12/512, merge=0/0, ticks=503/299, in_queue=802, util=89.43% 00:34:44.634 11:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:44.634 [global] 00:34:44.634 thread=1 00:34:44.634 invalidate=1 00:34:44.634 rw=write 00:34:44.634 time_based=1 00:34:44.634 runtime=1 00:34:44.634 ioengine=libaio 00:34:44.634 direct=1 00:34:44.634 bs=4096 00:34:44.634 iodepth=128 00:34:44.634 norandommap=0 00:34:44.634 numjobs=1 00:34:44.634 00:34:44.634 verify_dump=1 00:34:44.634 verify_backlog=512 00:34:44.634 verify_state_save=0 00:34:44.634 do_verify=1 00:34:44.634 verify=crc32c-intel 00:34:44.634 [job0] 00:34:44.634 filename=/dev/nvme0n1 00:34:44.634 [job1] 00:34:44.634 filename=/dev/nvme0n2 00:34:44.634 [job2] 00:34:44.634 filename=/dev/nvme0n3 00:34:44.634 [job3] 00:34:44.634 filename=/dev/nvme0n4 00:34:44.634 Could not set queue depth (nvme0n1) 00:34:44.634 Could not set queue depth (nvme0n2) 00:34:44.634 Could not set queue depth (nvme0n3) 00:34:44.634 Could not set queue depth (nvme0n4) 00:34:44.895 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.895 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.895 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.895 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:44.895 fio-3.35 00:34:44.895 Starting 4 threads 00:34:46.278 00:34:46.278 job0: (groupid=0, jobs=1): err= 0: pid=3009662: Wed Nov 20 11:35:38 2024 00:34:46.278 read: IOPS=6093, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1004msec) 00:34:46.278 slat (nsec): min=944, max=10398k, avg=73890.26, stdev=567956.37 00:34:46.278 clat (usec): min=2534, max=29677, avg=9960.60, stdev=3886.62 00:34:46.278 lat 
(usec): min=2540, max=29685, avg=10034.49, stdev=3921.73 00:34:46.278 clat percentiles (usec): 00:34:46.278 | 1.00th=[ 4080], 5.00th=[ 5669], 10.00th=[ 6456], 20.00th=[ 7308], 00:34:46.278 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9634], 00:34:46.278 | 70.00th=[10683], 80.00th=[12518], 90.00th=[15926], 95.00th=[18220], 00:34:46.278 | 99.00th=[21627], 99.50th=[22414], 99.90th=[27919], 99.95th=[27919], 00:34:46.278 | 99.99th=[29754] 00:34:46.278 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:34:46.278 slat (nsec): min=1624, max=11475k, avg=82602.26, stdev=531014.49 00:34:46.278 clat (usec): min=1218, max=58281, avg=10815.48, stdev=7996.08 00:34:46.278 lat (usec): min=1230, max=58296, avg=10898.08, stdev=8048.21 00:34:46.278 clat percentiles (usec): 00:34:46.278 | 1.00th=[ 3326], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 6063], 00:34:46.278 | 30.00th=[ 6849], 40.00th=[ 7242], 50.00th=[ 8356], 60.00th=[ 9896], 00:34:46.278 | 70.00th=[12387], 80.00th=[13829], 90.00th=[14877], 95.00th=[25560], 00:34:46.278 | 99.00th=[47449], 99.50th=[56361], 99.90th=[58459], 99.95th=[58459], 00:34:46.278 | 99.99th=[58459] 00:34:46.278 bw ( KiB/s): min=23352, max=25800, per=26.30%, avg=24576.00, stdev=1731.00, samples=2 00:34:46.278 iops : min= 5838, max= 6450, avg=6144.00, stdev=432.75, samples=2 00:34:46.278 lat (msec) : 2=0.13%, 4=1.52%, 10=61.21%, 20=32.74%, 50=3.96% 00:34:46.278 lat (msec) : 100=0.45% 00:34:46.278 cpu : usr=4.39%, sys=6.28%, ctx=535, majf=0, minf=1 00:34:46.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:46.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.278 issued rwts: total=6118,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.278 job1: (groupid=0, jobs=1): err= 0: pid=3009676: Wed Nov 20 11:35:38 2024 00:34:46.278 read: IOPS=2727, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1005msec) 00:34:46.278 slat (nsec): min=1043, max=16904k, avg=133965.03, stdev=859717.93 00:34:46.278 clat (usec): min=1228, max=52398, avg=16396.55, stdev=8532.49 00:34:46.278 lat (usec): min=4364, max=52406, avg=16530.51, stdev=8578.74 00:34:46.278 clat percentiles (usec): 00:34:46.278 | 1.00th=[ 4555], 5.00th=[ 6718], 10.00th=[ 7832], 20.00th=[ 9110], 00:34:46.278 | 30.00th=[13042], 40.00th=[13960], 50.00th=[14746], 60.00th=[16188], 00:34:46.278 | 70.00th=[17957], 80.00th=[19530], 90.00th=[28181], 95.00th=[35390], 00:34:46.278 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:34:46.278 | 99.99th=[52167] 00:34:46.278 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:34:46.278 slat (nsec): min=1553, max=21792k, avg=200457.28, stdev=1295758.22 00:34:46.278 clat (usec): min=6875, max=65317, avg=26575.79, stdev=16383.32 00:34:46.278 lat (usec): min=6882, max=65326, avg=26776.24, stdev=16453.78 00:34:46.278 clat percentiles (usec): 00:34:46.278 | 1.00th=[ 7308], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[11469], 00:34:46.278 | 30.00th=[14353], 40.00th=[15008], 50.00th=[24249], 60.00th=[31851], 00:34:46.278 | 70.00th=[34866], 80.00th=[38011], 90.00th=[55313], 95.00th=[59507], 00:34:46.278 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:34:46.278 | 99.99th=[65274] 00:34:46.278 bw ( KiB/s): min= 9208, max=15368, per=13.15%, avg=12288.00, stdev=4355.78, samples=2 00:34:46.278 iops : 
min= 2302, max= 3842, avg=3072.00, stdev=1088.94, samples=2 00:34:46.278 lat (msec) : 2=0.02%, 10=21.95%, 20=40.46%, 50=30.64%, 100=6.93% 00:34:46.278 cpu : usr=2.19%, sys=3.69%, ctx=231, majf=0, minf=2 00:34:46.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:34:46.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.278 issued rwts: total=2741,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.278 job2: (groupid=0, jobs=1): err= 0: pid=3009693: Wed Nov 20 11:35:38 2024 00:34:46.278 read: IOPS=6921, BW=27.0MiB/s (28.3MB/s)(27.3MiB/1008msec) 00:34:46.278 slat (nsec): min=914, max=19570k, avg=76795.09, stdev=582831.96 00:34:46.278 clat (usec): min=1128, max=62043, avg=10153.81, stdev=7843.54 00:34:46.278 lat (usec): min=1862, max=62047, avg=10230.61, stdev=7889.64 00:34:46.278 clat percentiles (usec): 00:34:46.278 | 1.00th=[ 3818], 5.00th=[ 5276], 10.00th=[ 5866], 20.00th=[ 6980], 00:34:46.278 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:34:46.278 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[13960], 95.00th=[17695], 00:34:46.278 | 99.00th=[54264], 99.50th=[57934], 99.90th=[62129], 99.95th=[62129], 00:34:46.278 | 99.99th=[62129] 00:34:46.278 write: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec); 0 zone resets 00:34:46.278 slat (nsec): min=1574, max=6132.9k, avg=55087.87, stdev=299474.30 00:34:46.278 clat (usec): min=1049, max=19570, avg=7957.45, stdev=2707.03 00:34:46.278 lat (usec): min=1058, max=19578, avg=8012.54, stdev=2720.41 00:34:46.278 clat percentiles (usec): 00:34:46.278 | 1.00th=[ 2474], 5.00th=[ 4293], 10.00th=[ 5080], 20.00th=[ 6325], 00:34:46.278 | 30.00th=[ 6652], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8094], 00:34:46.278 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[11076], 95.00th=[14484], 00:34:46.278 | 99.00th=[16188], 99.50th=[16909], 99.90th=[19530], 99.95th=[19530], 00:34:46.278 | 99.99th=[19530] 00:34:46.278 bw ( KiB/s): min=24576, max=32768, per=30.68%, avg=28672.00, stdev=5792.62, samples=2 00:34:46.278 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:34:46.278 lat (msec) : 2=0.51%, 4=2.23%, 10=77.77%, 20=17.53%, 50=1.08% 00:34:46.278 lat (msec) : 100=0.88% 00:34:46.278 cpu : usr=5.16%, sys=6.65%, ctx=727, majf=0, minf=1 00:34:46.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:34:46.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:46.278 issued rwts: total=6977,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:46.279 job3: (groupid=0, jobs=1): err= 0: pid=3009700: Wed Nov 20 11:35:38 2024 00:34:46.279 read: IOPS=6619, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:34:46.279 slat (nsec): min=927, max=10707k, avg=63381.76, stdev=522285.23 00:34:46.279 clat (usec): min=1762, max=25337, avg=9059.50, stdev=2715.23 00:34:46.279 lat (usec): min=1767, max=25363, avg=9122.88, stdev=2751.72 00:34:46.279 clat percentiles (usec): 00:34:46.279 | 1.00th=[ 3556], 5.00th=[ 4817], 10.00th=[ 5866], 20.00th=[ 7177], 00:34:46.279 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9241], 00:34:46.279 | 70.00th=[10159], 80.00th=[11338], 90.00th=[12387], 95.00th=[14222], 00:34:46.279 | 
99.00th=[17433], 99.50th=[17957], 99.90th=[19268], 99.95th=[19268],
00:34:46.279 | 99.99th=[25297]
00:34:46.279 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets
00:34:46.279 slat (nsec): min=1595, max=10159k, avg=63454.00, stdev=491825.96
00:34:46.279 clat (usec): min=613, max=68252, avg=9402.53, stdev=9986.31
00:34:46.279 lat (usec): min=652, max=68261, avg=9465.99, stdev=10051.63
00:34:46.279 clat percentiles (usec):
00:34:46.279 | 1.00th=[ 1418], 5.00th=[ 3752], 10.00th=[ 4686], 20.00th=[ 5604],
00:34:46.279 | 30.00th=[ 6325], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 7898],
00:34:46.279 | 70.00th=[ 8356], 80.00th=[ 9241], 90.00th=[11863], 95.00th=[16319],
00:34:46.279 | 99.00th=[64750], 99.50th=[67634], 99.90th=[67634], 99.95th=[68682],
00:34:46.279 | 99.99th=[68682]
00:34:46.279 bw ( KiB/s): min=27984, max=28424, per=30.18%, avg=28204.00, stdev=311.13, samples=2
00:34:46.279 iops : min= 6996, max= 7106, avg=7051.00, stdev=77.78, samples=2
00:34:46.279 lat (usec) : 750=0.01%, 1000=0.04%
00:34:46.279 lat (msec) : 2=0.85%, 4=3.89%, 10=70.83%, 20=22.29%, 50=0.67%
00:34:46.279 lat (msec) : 100=1.42%
00:34:46.279 cpu : usr=4.57%, sys=8.45%, ctx=415, majf=0, minf=1
00:34:46.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:34:46.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:46.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:46.279 issued rwts: total=6666,7168,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:46.279 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:46.279
00:34:46.279 Run status group 0 (all jobs):
00:34:46.279 READ: bw=87.2MiB/s (91.4MB/s), 10.7MiB/s-27.0MiB/s (11.2MB/s-28.3MB/s), io=87.9MiB (92.2MB), run=1004-1008msec
00:34:46.279 WRITE: bw=91.3MiB/s (95.7MB/s), 11.9MiB/s-27.8MiB/s (12.5MB/s-29.2MB/s), io=92.0MiB (96.5MB), run=1004-1008msec
00:34:46.279
00:34:46.279 Disk stats (read/write):
00:34:46.279 nvme0n1: ios=4620/4608, merge=0/0, ticks=47103/54462, in_queue=101565, util=91.68%
00:34:46.279 nvme0n2: ios=1841/2048, merge=0/0, ticks=9351/16726, in_queue=26077, util=95.92%
00:34:46.279 nvme0n3: ios=6606/6656, merge=0/0, ticks=26018/22593, in_queue=48611, util=91.22%
00:34:46.279 nvme0n4: ios=5573/5632, merge=0/0, ticks=45481/52039, in_queue=97520, util=88.87%
00:34:46.279 11:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:34:46.279 [global]
00:34:46.279 thread=1
00:34:46.279 invalidate=1
00:34:46.279 rw=randwrite
00:34:46.279 time_based=1
00:34:46.279 runtime=1
00:34:46.279 ioengine=libaio
00:34:46.279 direct=1
00:34:46.279 bs=4096
00:34:46.279 iodepth=128
00:34:46.279 norandommap=0
00:34:46.279 numjobs=1
00:34:46.279
00:34:46.279 verify_dump=1
00:34:46.279 verify_backlog=512
00:34:46.279 verify_state_save=0
00:34:46.279 do_verify=1
00:34:46.279 verify=crc32c-intel
00:34:46.279 [job0]
00:34:46.279 filename=/dev/nvme0n1
00:34:46.279 [job1]
00:34:46.279 filename=/dev/nvme0n2
00:34:46.279 [job2]
00:34:46.279 filename=/dev/nvme0n3
00:34:46.279 [job3]
00:34:46.279 filename=/dev/nvme0n4
00:34:46.279 Could not set queue depth (nvme0n1)
00:34:46.279 Could not set queue depth (nvme0n2)
00:34:46.279 Could not set queue depth (nvme0n3)
00:34:46.279 Could not set queue depth (nvme0n4)
00:34:46.540 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:46.540 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:46.540 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:46.540 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:34:46.540 fio-3.35
00:34:46.540 Starting 4 threads
00:34:47.927
00:34:47.927 job0: (groupid=0, jobs=1): err= 0: pid=3010090: Wed Nov 20 11:35:40 2024
00:34:47.927 read: IOPS=5378, BW=21.0MiB/s (22.0MB/s)(21.2MiB/1008msec)
00:34:47.927 slat (nsec): min=882, max=18433k, avg=93939.20, stdev=689017.03
00:34:47.927 clat (usec): min=936, max=62462, avg=12859.15, stdev=8313.96
00:34:47.927 lat (usec): min=2469, max=62466, avg=12953.09, stdev=8356.48
00:34:47.927 clat percentiles (usec):
00:34:47.927 | 1.00th=[ 3556], 5.00th=[ 4686], 10.00th=[ 5735], 20.00th=[ 6652],
00:34:47.927 | 30.00th=[ 8094], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[12387],
00:34:47.927 | 70.00th=[14615], 80.00th=[17171], 90.00th=[21365], 95.00th=[26608],
00:34:47.927 | 99.00th=[50594], 99.50th=[55313], 99.90th=[56361], 99.95th=[62653],
00:34:47.927 | 99.99th=[62653]
00:34:47.927 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets
00:34:47.927 slat (nsec): min=1521, max=8526.5k, avg=77004.78, stdev=476169.31
00:34:47.927 clat (usec): min=751, max=39697, avg=10334.43, stdev=6735.45
00:34:47.927 lat (usec): min=763, max=39704, avg=10411.43, stdev=6785.58
00:34:47.927 clat percentiles (usec):
00:34:47.927 | 1.00th=[ 1237], 5.00th=[ 3556], 10.00th=[ 5538], 20.00th=[ 5932],
00:34:47.927 | 30.00th=[ 6259], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8586],
00:34:47.927 | 70.00th=[10945], 80.00th=[14091], 90.00th=[21627], 95.00th=[26346],
00:34:47.927 | 99.00th=[30540], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584],
00:34:47.927 | 99.99th=[39584]
00:34:47.927 bw ( KiB/s): min=17368, max=27688, per=25.06%, avg=22528.00, stdev=7297.34, samples=2
00:34:47.927 iops : min= 4342, max= 6922, avg=5632.00, stdev=1824.34, samples=2
00:34:47.927 lat (usec) : 1000=0.26%
00:34:47.927 lat (msec) : 2=1.08%, 4=2.75%, 10=52.76%, 20=30.58%, 50=12.05%
00:34:47.927 lat (msec) : 100=0.52%
00:34:47.927 cpu : usr=3.28%, sys=5.26%, ctx=535, majf=0, minf=1
00:34:47.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:34:47.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:47.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:47.927 issued rwts: total=5422,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:47.927 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:47.927 job1: (groupid=0, jobs=1): err= 0: pid=3010096: Wed Nov 20 11:35:40 2024
00:34:47.927 read: IOPS=5539, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1002msec)
00:34:47.927 slat (nsec): min=911, max=15866k, avg=88327.50, stdev=635903.59
00:34:47.927 clat (usec): min=919, max=43207, avg=11264.48, stdev=5758.43
00:34:47.927 lat (usec): min=2776, max=43213, avg=11352.80, stdev=5803.88
00:34:47.927 clat percentiles (usec):
00:34:47.927 | 1.00th=[ 3752], 5.00th=[ 5604], 10.00th=[ 6718], 20.00th=[ 7439],
00:34:47.927 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9372], 60.00th=[10945],
00:34:47.927 | 70.00th=[12518], 80.00th=[14222], 90.00th=[17695], 95.00th=[21627],
00:34:47.927 | 99.00th=[35390], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254],
00:34:47.927 | 99.99th=[43254]
00:34:47.927 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets
00:34:47.927 slat (nsec): min=1481, max=17433k, avg=84386.00, stdev=660147.14
00:34:47.927 clat (usec): min=1087, max=35903, avg=11419.11, stdev=5652.73
00:34:47.927 lat (usec): min=1119, max=35933, avg=11503.50, stdev=5708.11
00:34:47.927 clat percentiles (usec):
00:34:47.927 | 1.00th=[ 2507], 5.00th=[ 5080], 10.00th=[ 6259], 20.00th=[ 7177],
00:34:47.927 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[10814],
00:34:47.927 | 70.00th=[13698], 80.00th=[16909], 90.00th=[19530], 95.00th=[21365],
00:34:47.927 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28705], 99.95th=[31851],
00:34:47.927 | 99.99th=[35914]
00:34:47.927 bw ( KiB/s): min=16384, max=28672, per=25.06%, avg=22528.00, stdev=8688.93, samples=2
00:34:47.927 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2
00:34:47.927 lat (usec) : 1000=0.01%
00:34:47.927 lat (msec) : 2=0.27%, 4=1.39%, 10=53.89%, 20=36.81%, 50=7.63%
00:34:47.927 cpu : usr=3.30%, sys=5.00%, ctx=433, majf=0, minf=1
00:34:47.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:34:47.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:47.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:47.927 issued rwts: total=5551,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:47.927 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:47.927 job2: (groupid=0, jobs=1): err= 0: pid=3010104: Wed Nov 20 11:35:40 2024
00:34:47.927 read: IOPS=5648, BW=22.1MiB/s (23.1MB/s)(22.2MiB/1005msec)
00:34:47.927 slat (nsec): min=927, max=15037k, avg=87599.41, stdev=621353.34
00:34:47.927 clat (usec): min=2674, max=39903, avg=10751.34, stdev=5829.15
00:34:47.927 lat (usec): min=3242, max=39908, avg=10838.94, stdev=5877.77
00:34:47.927 clat percentiles (usec):
00:34:47.927 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7439],
00:34:47.927 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9503],
00:34:47.927 | 70.00th=[10159], 80.00th=[11863], 90.00th=[19268], 95.00th=[26084],
00:34:47.927 | 99.00th=[35390], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109],
00:34:47.927 | 99.99th=[40109]
00:34:47.927 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets
00:34:47.927 slat (nsec): min=1533, max=7280.8k, avg=75241.42, stdev=472436.72
00:34:47.927 clat (usec): min=1157, max=39700, avg=10817.10, stdev=6034.90
00:34:47.927 lat (usec): min=1170, max=39702, avg=10892.34, stdev=6058.54
00:34:47.927 clat percentiles (usec):
00:34:47.927 | 1.00th=[ 2409], 5.00th=[ 5866], 10.00th=[ 6980], 20.00th=[ 7504],
00:34:47.927 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241],
00:34:47.927 | 70.00th=[10028], 80.00th=[12518], 90.00th=[16450], 95.00th=[25822],
00:34:47.927 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060],
00:34:47.927 | 99.99th=[39584]
00:34:47.927 bw ( KiB/s): min=20480, max=28008, per=26.97%, avg=24244.00, stdev=5323.10, samples=2
00:34:47.927 iops : min= 5120, max= 7002, avg=6061.00, stdev=1330.77, samples=2
00:34:47.927 lat (msec) : 2=0.42%, 4=0.47%, 10=68.18%, 20=22.71%, 50=8.22%
00:34:47.927 cpu : usr=3.88%, sys=6.08%, ctx=402, majf=0, minf=2
00:34:47.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:34:47.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:47.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:47.927 issued rwts: total=5677,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:47.927 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:47.927 job3: (groupid=0, jobs=1): err= 0: pid=3010111: Wed Nov 20 11:35:40 2024
00:34:47.927 read: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(22.4MiB/1048msec)
00:34:47.927 slat (nsec): min=946, max=8793.1k, avg=76031.48, stdev=571199.87
00:34:47.927 clat (usec): min=1538, max=49946, avg=11581.21, stdev=6391.37
00:34:47.928 lat (usec): min=1568, max=49956, avg=11657.24, stdev=6415.35
00:34:47.928 clat percentiles (usec):
00:34:47.928 | 1.00th=[ 2311], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 8225],
00:34:47.928 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11076],
00:34:47.928 | 70.00th=[11731], 80.00th=[12649], 90.00th=[16909], 95.00th=[22152],
00:34:47.928 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070],
00:34:47.928 | 99.99th=[50070]
00:34:47.928 write: IOPS=5862, BW=22.9MiB/s (24.0MB/s)(24.0MiB/1048msec); 0 zone resets
00:34:47.928 slat (nsec): min=1588, max=9627.6k, avg=73861.21, stdev=489300.59
00:34:47.928 clat (usec): min=1048, max=61876, avg=10835.33, stdev=7319.06
00:34:47.928 lat (usec): min=1057, max=61880, avg=10909.19, stdev=7356.42
00:34:47.928 clat percentiles (usec):
00:34:47.928 | 1.00th=[ 3228], 5.00th=[ 4948], 10.00th=[ 5604], 20.00th=[ 7177],
00:34:47.928 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9765],
00:34:47.928 | 70.00th=[10290], 80.00th=[11731], 90.00th=[19268], 95.00th=[24249],
00:34:47.928 | 99.00th=[50070], 99.50th=[51643], 99.90th=[58459], 99.95th=[58459],
00:34:47.928 | 99.99th=[62129]
00:34:47.928 bw ( KiB/s): min=24168, max=24688, per=27.17%, avg=24428.00, stdev=367.70, samples=2
00:34:47.928 iops : min= 6042, max= 6172, avg=6107.00, stdev=91.92, samples=2
00:34:47.928 lat (msec) : 2=0.41%, 4=1.95%, 10=52.69%, 20=37.22%, 50=7.13%
00:34:47.928 lat (msec) : 100=0.60%
00:34:47.928 cpu : usr=4.49%, sys=5.83%, ctx=403, majf=0, minf=1
00:34:47.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:34:47.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:47.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:47.928 issued rwts: total=5723,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:47.928 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:47.928
00:34:47.928 Run status group 0 (all jobs):
00:34:47.928 READ: bw=83.4MiB/s (87.4MB/s), 21.0MiB/s-22.1MiB/s (22.0MB/s-23.1MB/s), io=87.4MiB (91.6MB), run=1002-1048msec
00:34:47.928 WRITE: bw=87.8MiB/s (92.1MB/s), 21.8MiB/s-23.9MiB/s (22.9MB/s-25.0MB/s), io=92.0MiB (96.5MB), run=1002-1048msec
00:34:47.928
00:34:47.928 Disk stats (read/write):
00:34:47.928 nvme0n1: ios=5070/5120, merge=0/0, ticks=24189/21018, in_queue=45207, util=87.68%
00:34:47.928 nvme0n2: ios=4131/4488, merge=0/0, ticks=25733/27470, in_queue=53203, util=87.65%
00:34:47.928 nvme0n3: ios=4608/4644, merge=0/0, ticks=22050/23079, in_queue=45129, util=88.37%
00:34:47.928 nvme0n4: ios=4643/5083, merge=0/0, ticks=36391/43337, in_queue=79728, util=99.36%
00:34:47.928 11:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:34:47.928 11:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3010401
00:34:47.928 11:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
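The [global]/[jobN] file that fio-wrapper feeds to fio is dumped verbatim in the trace above, one [jobN] section per connected namespace. For reference, a minimal standalone invocation that reproduces job0 of the randwrite pass is sketched here; it is not part of the test run, and it assumes the same /dev/nvme0n1 device and exactly the options dumped above:

  fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=4096 --iodepth=128 \
      --ioengine=libaio --direct=1 --thread --invalidate=1 \
      --time_based --runtime=1 --numjobs=1 --norandommap=0 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 \
      --verify_dump=1 --verify_state_save=0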
00:34:47.928 11:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:34:47.928 [global]
00:34:47.928 thread=1
00:34:47.928 invalidate=1
00:34:47.928 rw=read
00:34:47.928 time_based=1
00:34:47.928 runtime=10
00:34:47.928 ioengine=libaio
00:34:47.928 direct=1
00:34:47.928 bs=4096
00:34:47.928 iodepth=1
00:34:47.928 norandommap=1
00:34:47.928 numjobs=1
00:34:47.928
00:34:47.928 [job0]
00:34:47.928 filename=/dev/nvme0n1
00:34:47.928 [job1]
00:34:47.928 filename=/dev/nvme0n2
00:34:47.928 [job2]
00:34:47.928 filename=/dev/nvme0n3
00:34:47.928 [job3]
00:34:47.928 filename=/dev/nvme0n4
00:34:47.928 Could not set queue depth (nvme0n1)
00:34:47.928 Could not set queue depth (nvme0n2)
00:34:47.928 Could not set queue depth (nvme0n3)
00:34:47.928 Could not set queue depth (nvme0n4)
00:34:48.496 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:48.496 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:48.496 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:48.496 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:34:48.496 fio-3.35
00:34:48.496 Starting 4 threads
00:34:51.037 11:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:34:51.037 11:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:34:51.037 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=270336, buflen=4096
00:34:51.037 fio: pid=3010618, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:51.297 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10346496, buflen=4096
00:34:51.297 fio: pid=3010614, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:51.297 11:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:51.297 11:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:34:51.557 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:51.557 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:34:51.557 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=487424, buflen=4096
00:34:51.557 fio: pid=3010603, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:51.557 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10997760, buflen=4096
00:34:51.557 fio: pid=3010610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:34:51.557 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:51.557 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:34:51.820
00:34:51.820 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3010603: Wed Nov 20 11:35:44 2024
00:34:51.820 read: IOPS=40, BW=160KiB/s (163kB/s)(476KiB/2984msec)
00:34:51.820 slat (usec): min=8, max=22683, avg=304.89, stdev=2277.11
00:34:51.820 clat (usec): min=634, max=42018, avg=24579.69, stdev=19789.57
00:34:51.820 lat (usec): min=662, max=64043, avg=24886.91, stdev=20156.95
00:34:51.820 clat percentiles (usec):
00:34:51.820 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 930], 20.00th=[ 971],
00:34:51.820 | 30.00th=[ 1057], 40.00th=[ 1532], 50.00th=[40633], 60.00th=[41157],
00:34:51.820 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:34:51.820 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:51.820 | 99.99th=[42206]
00:34:51.820 bw ( KiB/s): min= 96, max= 312, per=2.51%, avg=172.80, stdev=103.48, samples=5
00:34:51.820 iops : min= 24, max= 78, avg=43.20, stdev=25.87, samples=5
00:34:51.820 lat (usec) : 750=4.17%, 1000=23.33%
00:34:51.820 lat (msec) : 2=13.33%, 50=58.33%
00:34:51.820 cpu : usr=0.03%, sys=0.17%, ctx=125, majf=0, minf=1
00:34:51.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:51.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:51.820 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:51.820 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3010610: Wed Nov 20 11:35:44 2024
00:34:51.820 read: IOPS=851, BW=3404KiB/s (3486kB/s)(10.5MiB/3155msec)
00:34:51.820 slat (usec): min=6, max=32912, avg=69.32, stdev=973.79
00:34:51.820 clat (usec): min=201, max=1552, avg=1088.96, stdev=126.48
00:34:51.820 lat (usec): min=226, max=33531, avg=1158.30, stdev=977.48
00:34:51.820 clat percentiles (usec):
00:34:51.820 | 1.00th=[ 816], 5.00th=[ 906], 10.00th=[ 947], 20.00th=[ 988],
00:34:51.820 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1123],
00:34:51.820 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1287],
00:34:51.820 | 99.00th=[ 1352], 99.50th=[ 1418], 99.90th=[ 1516], 99.95th=[ 1549],
00:34:51.820 | 99.99th=[ 1549]
00:34:51.820 bw ( KiB/s): min= 2765, max= 3872, per=50.44%, avg=3451.50, stdev=405.49, samples=6
00:34:51.820 iops : min= 691, max= 968, avg=862.83, stdev=101.46, samples=6
00:34:51.820 lat (usec) : 250=0.04%, 750=0.34%, 1000=25.73%
00:34:51.820 lat (msec) : 2=73.86%
00:34:51.820 cpu : usr=1.11%, sys=3.04%, ctx=2692, majf=0, minf=2
00:34:51.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:51.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 issued rwts: total=2686,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:51.820 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:51.820 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3010614: Wed Nov 20 11:35:44 2024
00:34:51.820 read: IOPS=907, BW=3628KiB/s (3715kB/s)(9.87MiB/2785msec)
00:34:51.820 slat (nsec): min=24269, max=60337, avg=25634.58, stdev=2891.37
00:34:51.820 clat (usec): min=615, max=1352, avg=1061.80, stdev=88.50
00:34:51.820 lat (usec): min=641, max=1378, avg=1087.43, stdev=88.37
00:34:51.820 clat percentiles (usec):
00:34:51.820 | 1.00th=[ 807], 5.00th=[ 889], 10.00th=[ 947], 20.00th=[ 1004],
00:34:51.820 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090],
00:34:51.820 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188],
00:34:51.820 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1287], 99.95th=[ 1303],
00:34:51.820 | 99.99th=[ 1352]
00:34:51.820 bw ( KiB/s): min= 3632, max= 3680, per=53.57%, avg=3665.60, stdev=21.47, samples=5
00:34:51.820 iops : min= 908, max= 920, avg=916.40, stdev= 5.37, samples=5
00:34:51.820 lat (usec) : 750=0.16%, 1000=19.23%
00:34:51.820 lat (msec) : 2=80.57%
00:34:51.820 cpu : usr=1.11%, sys=2.66%, ctx=2528, majf=0, minf=2
00:34:51.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:51.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 issued rwts: total=2527,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:51.820 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:51.820 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3010618: Wed Nov 20 11:35:44 2024
00:34:51.820 read: IOPS=25, BW=101KiB/s (103kB/s)(264KiB/2624msec)
00:34:51.820 slat (nsec): min=25484, max=73944, avg=26881.85, stdev=5873.53
00:34:51.820 clat (usec): min=724, max=42166, avg=39400.22, stdev=9858.10
00:34:51.820 lat (usec): min=798, max=42192, avg=39427.10, stdev=9855.00
00:34:51.820 clat percentiles (usec):
00:34:51.820 | 1.00th=[ 725], 5.00th=[ 1270], 10.00th=[41157], 20.00th=[41681],
00:34:51.820 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206],
00:34:51.820 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:34:51.820 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:34:51.820 | 99.99th=[42206]
00:34:51.820 bw ( KiB/s): min= 96, max= 112, per=1.45%, avg=99.20, stdev= 7.16, samples=5
00:34:51.820 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5
00:34:51.820 lat (usec) : 750=1.49%, 1000=2.99%
00:34:51.820 lat (msec) : 2=1.49%, 50=92.54%
00:34:51.820 cpu : usr=0.00%, sys=0.11%, ctx=67, majf=0, minf=2
00:34:51.820 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:51.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:51.820 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:51.820 latency : target=0, window=0, percentile=100.00%, depth=1
00:34:51.820
00:34:51.820 Run status group 0 (all jobs):
00:34:51.820 READ: bw=6841KiB/s (7005kB/s), 101KiB/s-3628KiB/s (103kB/s-3715kB/s), io=21.1MiB (22.1MB), run=2624-3155msec
00:34:51.820
00:34:51.820 Disk stats (read/write):
00:34:51.820 nvme0n1: ios=145/0, merge=0/0, ticks=3564/0, in_queue=3564, util=99.30%
00:34:51.820 nvme0n2: ios=2646/0, merge=0/0, ticks=2765/0, in_queue=2765, util=92.16%
00:34:51.820 nvme0n3: ios=2367/0, merge=0/0, ticks=2455/0, in_queue=2455, util=96.03%
00:34:51.820 nvme0n4: ios=65/0, merge=0/0, ticks=2561/0, in_queue=2561, util=96.46%
00:34:51.820 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:51.820 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:34:52.081 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:52.081 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:34:52.081 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:52.081 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:34:52.343 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:34:52.343 11:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3010401
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:52.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:34:52.603 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:34:52.864 nvmf hotplug test: fio failed as expected
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
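Consolidated, the hotplug teardown traced above deletes the RAID bdevs first and then every Malloc bdev out from under the still-running fio jobs, before dropping the subsystem. A rough equivalent, assuming the same bdev names and rpc.py on PATH, would be:

  rpc.py bdev_raid_delete concat0
  rpc.py bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      rpc.py bdev_malloc_delete "$malloc_bdev"    # pull the backing bdev while I/O is in flight
  done
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The err=95 (Operation not supported) io_u failures above are the intended outcome: reads land on namespaces whose backing bdevs have just disappeared, which is what the 'fio failed as expected' check asserts.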
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:52.864 rmmod nvme_tcp
00:34:52.864 rmmod nvme_fabrics
00:34:52.864 rmmod nvme_keyring
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3007239 ']'
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3007239
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3007239 ']'
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3007239
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:52.864 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3007239
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3007239'
00:34:53.124 killing process with pid 3007239
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3007239
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3007239
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:53.124 11:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:55.673 00:34:55.673
00:34:55.673 real 0m28.301s
00:34:55.673 user 2m16.168s
00:34:55.673 sys 0m12.050s
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:34:55.673 ************************************
00:34:55.673 END TEST nvmf_fio_target
00:34:55.673 ************************************
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:55.673 ************************************
00:34:55.673 START TEST nvmf_bdevio
00:34:55.673 ************************************
00:34:55.673 11:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:34:55.673 * Looking for test storage...
00:34:55.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:34:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.673 --rc genhtml_branch_coverage=1
00:34:55.673 --rc genhtml_function_coverage=1
00:34:55.673 --rc genhtml_legend=1
00:34:55.673 --rc geninfo_all_blocks=1
00:34:55.673 --rc geninfo_unexecuted_blocks=1
00:34:55.673
00:34:55.673 '
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:34:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.673 --rc genhtml_branch_coverage=1
00:34:55.673 --rc genhtml_function_coverage=1
00:34:55.673 --rc genhtml_legend=1
00:34:55.673 --rc geninfo_all_blocks=1
00:34:55.673 --rc geninfo_unexecuted_blocks=1
00:34:55.673
00:34:55.673 '
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:34:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.673 --rc genhtml_branch_coverage=1
00:34:55.673 --rc genhtml_function_coverage=1
00:34:55.673 --rc genhtml_legend=1
00:34:55.673 --rc geninfo_all_blocks=1
00:34:55.673 --rc geninfo_unexecuted_blocks=1
00:34:55.673
00:34:55.673 '
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:34:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:55.673 --rc genhtml_branch_coverage=1
00:34:55.673 --rc genhtml_function_coverage=1
00:34:55.673 --rc genhtml_legend=1
00:34:55.673 --rc geninfo_all_blocks=1
00:34:55.673 --rc geninfo_unexecuted_blocks=1
00:34:55.673
00:34:55.673 '
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:55.673 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:34:55.674 11:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:35:03.821 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:35:03.821 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:35:03.821 Found net devices under 0000:4b:00.0: cvl_0_0
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:35:03.821 Found net devices under 0000:4b:00.1: cvl_0_1
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:35:03.821 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:03.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:03.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms
00:35:03.822
00:35:03.822 --- 10.0.0.2 ping statistics ---
00:35:03.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:03.822 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:03.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:03.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms
00:35:03.822
00:35:03.822 --- 10.0.0.1 ping statistics ---
00:35:03.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:03.822 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
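The namespace plumbing traced above gives the target its own network stack: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2 while the initiator keeps cvl_0_1 at 10.0.0.1, and the two pings verify reachability in both directions. Reduced to bare commands, and assuming the same interface names, the topology is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator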
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3015647 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3015647 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3015647 ']' 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.822 11:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 [2024-11-20 11:35:55.687820] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:03.822 [2024-11-20 11:35:55.688942] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:35:03.822 [2024-11-20 11:35:55.688993] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.822 [2024-11-20 11:35:55.787178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.822 [2024-11-20 11:35:55.840412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.822 [2024-11-20 11:35:55.840462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.822 [2024-11-20 11:35:55.840470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.822 [2024-11-20 11:35:55.840478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.822 [2024-11-20 11:35:55.840484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.822 [2024-11-20 11:35:55.842515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:03.822 [2024-11-20 11:35:55.842676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:03.822 [2024-11-20 11:35:55.842834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.822 [2024-11-20 11:35:55.842836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:03.822 [2024-11-20 11:35:55.920007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
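At this point the target is up: nvmf_tgt runs inside the namespace in interrupt mode, and the remaining poll-group threads switch to intr mode just below. Condensed, the nvmf_tcp_init sequence traced above reduces to the following (a sketch using the cvl_* names from this run; other rigs report different interface names):

    # target-side port moves into its own network namespace; initiator side stays in the host ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and confirm reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmfappstart: four reactors (cores 3-6, mask 0x78), interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78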
00:35:03.822 [2024-11-20 11:35:55.920744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:03.822 [2024-11-20 11:35:55.921144] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:03.822 [2024-11-20 11:35:55.921664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:03.822 [2024-11-20 11:35:55.921701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.822 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:03.822 [2024-11-20 11:35:56.555697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.083 Malloc0 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:04.083 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.084 11:35:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:04.084 [2024-11-20 11:35:56.655819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:04.084 { 00:35:04.084 "params": { 00:35:04.084 "name": "Nvme$subsystem", 00:35:04.084 "trtype": "$TEST_TRANSPORT", 00:35:04.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.084 "adrfam": "ipv4", 00:35:04.084 "trsvcid": "$NVMF_PORT", 00:35:04.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.084 "hdgst": ${hdgst:-false}, 00:35:04.084 "ddgst": ${ddgst:-false} 00:35:04.084 }, 00:35:04.084 "method": "bdev_nvme_attach_controller" 00:35:04.084 } 00:35:04.084 EOF 00:35:04.084 )") 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:04.084 11:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:04.084 "params": { 00:35:04.084 "name": "Nvme1", 00:35:04.084 "trtype": "tcp", 00:35:04.084 "traddr": "10.0.0.2", 00:35:04.084 "adrfam": "ipv4", 00:35:04.084 "trsvcid": "4420", 00:35:04.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.084 "hdgst": false, 00:35:04.084 "ddgst": false 00:35:04.084 }, 00:35:04.084 "method": "bdev_nvme_attach_controller" 00:35:04.084 }' 00:35:04.084 [2024-11-20 11:35:56.715024] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
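The rpc_cmd calls above provision the target end to end; spelled out against scripts/rpc.py, the sequence is (arguments verbatim from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, gen_nvmf_target_json expands its heredoc template into the attach-controller config printed above, which bdevio reads over --json /dev/fd/62 (reformatted here for readability):

    {
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

so the host attaches Nvme1 over NVMe/TCP before the CUnit suite starts; bdevio's own DPDK/EAL startup continues below.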
00:35:04.084 [2024-11-20 11:35:56.715099] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015964 ] 00:35:04.084 [2024-11-20 11:35:56.809775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:04.345 [2024-11-20 11:35:56.866181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.345 [2024-11-20 11:35:56.866233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.345 [2024-11-20 11:35:56.866255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.345 I/O targets: 00:35:04.345 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:04.345 00:35:04.345 00:35:04.345 CUnit - A unit testing framework for C - Version 2.1-3 00:35:04.345 http://cunit.sourceforge.net/ 00:35:04.345 00:35:04.345 00:35:04.345 Suite: bdevio tests on: Nvme1n1 00:35:04.607 Test: blockdev write read block ...passed 00:35:04.607 Test: blockdev write zeroes read block ...passed 00:35:04.607 Test: blockdev write zeroes read no split ...passed 00:35:04.607 Test: blockdev write zeroes read split ...passed 00:35:04.607 Test: blockdev write zeroes read split partial ...passed 00:35:04.607 Test: blockdev reset ...[2024-11-20 11:35:57.277060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:04.607 [2024-11-20 11:35:57.277168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb970 (9): Bad file descriptor 00:35:04.868 [2024-11-20 11:35:57.412708] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
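Two things worth decoding in this stretch of the suite. First, the "Bad file descriptor" during blockdev reset is expected: nvme_ctrlr_disconnect has already torn the TCP qpair down, so flushing its pending completions fails, and bdev_nvme then reconnects and reports the reset successful. Second, the comparev-and-writev cases printed next exercise NVMe fused COMPARE + WRITE with deliberate miscompares; the (SCT/SC) pairs in those completions decode as (worked from the NVMe status fields in the log):

    (02/85): SCT 0x2 (media/data integrity) + SC 0x85 -> COMPARE FAILURE (miscompare, expected)
    (00/09): SCT 0x0 (generic)              + SC 0x09 -> ABORTED - FAILED FUSED
             (the WRITE half of the fused pair is aborted because its COMPARE failed)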
00:35:04.868 passed 00:35:04.868 Test: blockdev write read 8 blocks ...passed 00:35:04.868 Test: blockdev write read size > 128k ...passed 00:35:04.868 Test: blockdev write read invalid size ...passed 00:35:04.868 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:04.868 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:04.868 Test: blockdev write read max offset ...passed 00:35:05.129 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:05.129 Test: blockdev writev readv 8 blocks ...passed 00:35:05.129 Test: blockdev writev readv 30 x 1block ...passed 00:35:05.129 Test: blockdev writev readv block ...passed 00:35:05.129 Test: blockdev writev readv size > 128k ...passed 00:35:05.129 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:05.129 Test: blockdev comparev and writev ...[2024-11-20 11:35:57.672885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.672948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.672957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.673439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.673452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.673466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.673475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.673958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.673971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.673985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.673994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.674466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.674479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.674493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:05.129 [2024-11-20 11:35:57.674501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:05.129 passed 00:35:05.129 Test: blockdev nvme passthru rw ...passed 00:35:05.129 Test: blockdev nvme passthru vendor specific ...[2024-11-20 11:35:57.758592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.129 [2024-11-20 11:35:57.758611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.758850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.129 [2024-11-20 11:35:57.758862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.759083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.129 [2024-11-20 11:35:57.759093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:05.129 [2024-11-20 11:35:57.759322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:05.129 [2024-11-20 11:35:57.759333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:05.129 passed 00:35:05.129 Test: blockdev nvme admin passthru ...passed 00:35:05.129 Test: blockdev copy ...passed 00:35:05.129 00:35:05.129 Run Summary: Type Total Ran Passed Failed Inactive 00:35:05.129 suites 1 1 n/a 0 0 00:35:05.129 tests 23 23 23 0 0 00:35:05.129 asserts 152 152 152 0 n/a 00:35:05.129 00:35:05.129 Elapsed time = 1.512 seconds 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.392 11:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.392 rmmod nvme_tcp 00:35:05.392 rmmod nvme_fabrics 00:35:05.392 rmmod nvme_keyring 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
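The run summary closes the suite cleanly (23/23 tests, 152/152 asserts), and nvmftestfini unwinds the whole setup. The teardown traced here and just below condenses to (a sketch; killprocess and _remove_spdk_ns run with xtrace partly disabled, so the netns deletion is inferred rather than shown):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp          # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    kill 3015647 && wait 3015647     # killprocess: stop the nvmf_tgt reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1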
00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3015647 ']' 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3015647 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3015647 ']' 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3015647 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015647 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015647' 00:35:05.392 killing process with pid 3015647 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3015647 00:35:05.392 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3015647 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.653 11:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.207 11:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.207 00:35:08.207 real 0m12.433s 00:35:08.207 user 
0m10.560s 00:35:08.207 sys 0m6.600s 00:35:08.207 11:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.207 11:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:08.207 ************************************ 00:35:08.207 END TEST nvmf_bdevio 00:35:08.207 ************************************ 00:35:08.207 11:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:08.207 00:35:08.207 real 5m1.651s 00:35:08.207 user 10m17.659s 00:35:08.207 sys 2m6.018s 00:35:08.207 11:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.207 11:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:08.208 ************************************ 00:35:08.208 END TEST nvmf_target_core_interrupt_mode 00:35:08.208 ************************************ 00:35:08.208 11:36:00 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.208 11:36:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:08.208 11:36:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.208 11:36:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:08.208 ************************************ 00:35:08.208 START TEST nvmf_interrupt 00:35:08.208 ************************************ 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.208 * Looking for test storage... 
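The nvmf_interrupt test begins by sourcing the shared harnesses, and the scripts/common.sh steps that follow are autotest_common probing the installed lcov: `lt 1.15 2` splits each version string on '.', '-' and ':' and compares field by field. A condensed sketch of the '<' path exercised below (the real cmp_versions in scripts/common.sh also handles other operators):

    lt() { cmp_versions "$1" '<' "$2"; }   # true when $1 sorts strictly before $2
    cmp_versions() {
        local IFS='.-:' v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not strictly less
    }

Here ver1=(1 15) against ver2=(2): 1 < 2 in the first field, so the check succeeds and the branch-coverage LCOV_OPTS get exported.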
00:35:08.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.208 --rc genhtml_branch_coverage=1 00:35:08.208 --rc genhtml_function_coverage=1 00:35:08.208 --rc genhtml_legend=1 00:35:08.208 --rc geninfo_all_blocks=1 00:35:08.208 --rc geninfo_unexecuted_blocks=1 00:35:08.208 00:35:08.208 ' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.208 --rc genhtml_branch_coverage=1 00:35:08.208 --rc genhtml_function_coverage=1 00:35:08.208 --rc genhtml_legend=1 00:35:08.208 --rc geninfo_all_blocks=1 00:35:08.208 --rc geninfo_unexecuted_blocks=1 00:35:08.208 00:35:08.208 ' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.208 --rc genhtml_branch_coverage=1 00:35:08.208 --rc genhtml_function_coverage=1 00:35:08.208 --rc genhtml_legend=1 00:35:08.208 --rc geninfo_all_blocks=1 00:35:08.208 --rc geninfo_unexecuted_blocks=1 00:35:08.208 00:35:08.208 ' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:08.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.208 --rc genhtml_branch_coverage=1 00:35:08.208 --rc genhtml_function_coverage=1 00:35:08.208 --rc genhtml_legend=1 00:35:08.208 --rc geninfo_all_blocks=1 00:35:08.208 --rc geninfo_unexecuted_blocks=1 00:35:08.208 00:35:08.208 ' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.208 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.209 11:36:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:16.348 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.348 11:36:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:16.348 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:16.348 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:16.348 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.348 11:36:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.348 11:36:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:35:16.348 00:35:16.348 --- 10.0.0.2 ping statistics --- 00:35:16.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.348 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
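Note the ipts wrapper at common.sh@287/@790: every rule the harness inserts is tagged with an SPDK_NVMF comment, which is what lets iptr at teardown strip exactly those rules and nothing else. Approximately (a sketch; see nvmf/common.sh for the real definitions):

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

So `ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT` yields the tagged ACCEPT rule seen above, and restoring the saved ruleset minus the SPDK_NVMF lines leaves unrelated firewall state untouched.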
00:35:16.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:35:16.348 00:35:16.348 --- 10.0.0.1 ping statistics --- 00:35:16.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.348 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3020417 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3020417 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3020417 ']' 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.348 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 [2024-11-20 11:36:08.186752] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:16.348 [2024-11-20 11:36:08.187714] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:35:16.348 [2024-11-20 11:36:08.187751] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.348 [2024-11-20 11:36:08.279474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:16.348 [2024-11-20 11:36:08.315210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
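This second target instance runs with -m 0x3, i.e. two reactors on cores 0 and 1, again under --interrupt-mode:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3

Further down, setup_bdev_aio backs the namespace with a 10 MB file (5000 x 2048 = 10,240,000 bytes) registered as AIO0 with a 2048-byte block size:

    dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
    rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048

and the reactor_is_idle checks that follow scrape per-thread CPU out of top, treating <= 30% as idle (busy_threshold is 65%); a sketch of that parse:

    cpu=$(top -bHn 1 -p "$nvmfpid" -w 256 | grep reactor_0 | sed -e 's/^\s*//g' | awk '{print $9}')
    (( ${cpu%.*} <= 30 )) && echo idle || echo busy

With no I/O in flight both reactors report 0.0%, which is the point of interrupt mode: idle reactors sleep on events instead of busy-polling.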
00:35:16.348 [2024-11-20 11:36:08.315240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.348 [2024-11-20 11:36:08.315249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.348 [2024-11-20 11:36:08.315256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.348 [2024-11-20 11:36:08.315262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.349 [2024-11-20 11:36:08.316499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.349 [2024-11-20 11:36:08.316587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.349 [2024-11-20 11:36:08.372577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:16.349 [2024-11-20 11:36:08.373308] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:16.349 [2024-11-20 11:36:08.373575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:16.349 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.349 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:16.349 11:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:16.349 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.349 11:36:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:16.349 5000+0 records in 00:35:16.349 5000+0 records out 00:35:16.349 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0194184 s, 527 MB/s 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.349 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.610 AIO0 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.610 [2024-11-20 11:36:09.117438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.610 11:36:09 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.610 [2024-11-20 11:36:09.162031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3020417 0 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3020417 0 idle 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020417 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.26 reactor_0' 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020417 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.26 reactor_0 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.610 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3020417 1 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3020417 1 idle 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020421 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020421 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3020996 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3020417 0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3020417 0 busy 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:16.870 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020417 root 20 0 128.2g 44928 32256 R 50.0 0.0 0:00.35 reactor_0' 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020417 root 20 0 128.2g 44928 32256 R 50.0 0.0 0:00.35 reactor_0 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=50.0 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=50 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3020417 1 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3020417 1 busy 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:17.130 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020421 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1' 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020421 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.390 11:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3020996 00:35:27.385 Initializing NVMe Controllers 00:35:27.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:27.385 Controller IO queue size 256, less than required. 00:35:27.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:27.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:27.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:27.385 Initialization complete. Launching workers. 
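While spdk_nvme_perf (pid 3020996 above) drives the 10-second randrw load (-q 256 -o 4096 -M 30) on lcores 2 and 3, the harness decides busy vs. idle per reactor thread by sampling %CPU with top, exactly as traced above and again after the run. A minimal sketch of that probe, reconstructed from the traced commands; variable names are illustrative, not the verbatim interrupt/common.sh source:

    pid=3020417 idx=0 idle_threshold=30        # values as shown in the trace
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')   # field 9 = %CPU
    cpu_rate=${cpu_rate%.*}                    # truncate: 99.9 -> 99, 0.0 -> 0, matching the trace
    (( cpu_rate > idle_threshold )) && echo busy || echo idle

The perf run's own summary follows.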
00:35:27.385 ========================================================
00:35:27.385 Latency(us)
00:35:27.385 Device Information : IOPS MiB/s Average min max
00:35:27.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19980.30 78.05 12817.04 3624.52 30987.50
00:35:27.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19086.80 74.56 13414.66 7664.81 28317.94
00:35:27.385 ========================================================
00:35:27.385 Total : 39067.09 152.61 13109.02 3624.52 30987.50
00:35:27.385
00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3020417 0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3020417 0 idle 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020417 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.03 reactor_0' 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020417 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.03 reactor_0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3020417 1 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3020417 1 idle 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:27.385 11:36:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020421 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.77 reactor_1' 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020421 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.77 reactor_1 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:27.385 11:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:28.327 11:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:28.327 11:36:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:28.327 11:36:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:28.327 11:36:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:28.327 11:36:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3020417 0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3020417 0 idle 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020417 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.41 reactor_0' 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020417 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.41 reactor_0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.288 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3020417 1 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3020417 1 idle 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3020417 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
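The initiator attach traced just above (target/interrupt.sh@50-51) is the usual connect-then-poll pattern: nvme-cli logs in over TCP, then waitforserial polls lsblk until a block device carrying the subsystem serial appears. A sketch reconstructed from the traced commands; the host NQN/ID are the values generated for this run, and the retry bound mirrors the traced (( i++ <= 15 )) loop:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    for i in {1..16}; do                       # up to 16 attempts, 2 s apart
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
        sleep 2
    done

The idle re-check for reactor 1 continues below; both reactors must stay at or under the 30% idle threshold with the host connected before the test moves on to disconnect and teardown.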
00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3020417 -w 256 00:35:30.289 11:36:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3020421 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.91 reactor_1' 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3020421 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:09.91 reactor_1 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.549 11:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:30.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.810 rmmod nvme_tcp 00:35:30.810 rmmod nvme_fabrics 00:35:30.810 rmmod nvme_keyring 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3020417 ']' 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3020417 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3020417 ']' 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3020417 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3020417 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3020417' 00:35:30.810 killing process with pid 3020417 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3020417 00:35:30.810 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3020417 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.071 11:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.615 11:36:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:33.615 00:35:33.615 real 0m25.266s 00:35:33.615 user 0m39.749s 00:35:33.615 sys 0m9.958s 00:35:33.615 11:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.615 11:36:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:33.615 ************************************ 00:35:33.615 END TEST nvmf_interrupt 00:35:33.615 ************************************ 00:35:33.615 00:35:33.615 real 30m11.372s 00:35:33.615 user 61m47.911s 00:35:33.615 sys 10m20.838s 00:35:33.615 11:36:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.615 11:36:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.615 ************************************ 00:35:33.615 END TEST nvmf_tcp 00:35:33.615 ************************************ 00:35:33.615 11:36:25 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:33.615 11:36:25 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:33.615 11:36:25 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:33.615 11:36:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:33.615 11:36:25 -- common/autotest_common.sh@10 -- # set +x 00:35:33.615 ************************************ 00:35:33.615 START TEST spdkcli_nvmf_tcp 00:35:33.615 ************************************ 00:35:33.615 11:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:33.615 * Looking for test storage... 00:35:33.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:33.615 11:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:33.615 11:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:33.615 11:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:33.615 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.616 --rc genhtml_branch_coverage=1 00:35:33.616 --rc genhtml_function_coverage=1 00:35:33.616 --rc genhtml_legend=1 00:35:33.616 --rc geninfo_all_blocks=1 00:35:33.616 --rc geninfo_unexecuted_blocks=1 00:35:33.616 00:35:33.616 ' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.616 --rc genhtml_branch_coverage=1 00:35:33.616 --rc genhtml_function_coverage=1 00:35:33.616 --rc genhtml_legend=1 00:35:33.616 --rc geninfo_all_blocks=1 00:35:33.616 --rc geninfo_unexecuted_blocks=1 00:35:33.616 00:35:33.616 ' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.616 --rc genhtml_branch_coverage=1 00:35:33.616 --rc genhtml_function_coverage=1 00:35:33.616 --rc genhtml_legend=1 00:35:33.616 --rc geninfo_all_blocks=1 00:35:33.616 --rc geninfo_unexecuted_blocks=1 00:35:33.616 00:35:33.616 ' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:33.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.616 --rc genhtml_branch_coverage=1 00:35:33.616 --rc genhtml_function_coverage=1 00:35:33.616 --rc genhtml_legend=1 00:35:33.616 --rc geninfo_all_blocks=1 00:35:33.616 --rc geninfo_unexecuted_blocks=1 00:35:33.616 00:35:33.616 ' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:33.616 
11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:33.616 11:36:26 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:33.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3024437 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3024437 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3024437 ']' 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.616 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.616 [2024-11-20 11:36:26.164181] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
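At this point nvmf_tgt has been relaunched with -m 0x3 (two reactors) for the spdkcli test, and once waitforlisten sees the RPC socket, the spdkcli_job.py run below replays a list of (command, expected-match, should-succeed) triples. The same configuration can be driven one command per invocation of scripts/spdkcli.py, which is the form the trace itself uses later for check_match (spdkcli.py ll /nvmf); a short sketch, with paths and arguments copied from the job list below:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    ./scripts/spdkcli.py ll /nvmf              # inspect the resulting tree, as check_match does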
00:35:33.616 [2024-11-20 11:36:26.164252] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024437 ] 00:35:33.616 [2024-11-20 11:36:26.255048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:33.616 [2024-11-20 11:36:26.309028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.616 [2024-11-20 11:36:26.309034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.560 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.560 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:34.560 11:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:34.560 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.560 11:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.560 11:36:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:34.560 11:36:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:34.560 11:36:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:34.560 11:36:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.560 11:36:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:34.560 11:36:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:34.560 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:34.560 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:34.560 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:34.560 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:34.560 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:34.560 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:34.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:34.560 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:34.560 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:34.560 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:34.560 ' 00:35:37.106 [2024-11-20 11:36:29.764293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.518 [2024-11-20 11:36:31.124459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:41.060 [2024-11-20 11:36:33.647478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:43.651 [2024-11-20 11:36:35.869807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:45.035 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:45.035 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:45.035 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:45.035 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:45.035 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:45.035 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:45.035 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:45.035 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.035 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:45.035 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.035 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:45.035 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:45.035 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:45.035 11:36:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.607 
11:36:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:45.607 11:36:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:45.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:45.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:45.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:45.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:45.607 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:45.607 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:45.607 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:45.607 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:45.607 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:45.607 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:45.607 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:45.607 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:45.607 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:45.607 ' 00:35:52.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:52.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:52.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:52.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:52.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:52.195 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:52.195 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:52.195 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:52.195 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:52.195 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:52.195 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:52.195 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:52.195 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.195 
11:36:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3024437 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3024437 ']' 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3024437 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3024437 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3024437' 00:35:52.195 killing process with pid 3024437 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3024437 00:35:52.195 11:36:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3024437 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3024437 ']' 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3024437 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3024437 ']' 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3024437 00:35:52.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3024437) - No such process 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3024437 is not found' 00:35:52.195 Process with pid 3024437 is not found 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:52.195 11:36:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:52.196 11:36:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:52.196 00:35:52.196 real 0m18.185s 00:35:52.196 user 0m40.365s 00:35:52.196 sys 0m0.920s 00:35:52.196 11:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:52.196 11:36:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:52.196 ************************************ 00:35:52.196 END TEST spdkcli_nvmf_tcp 00:35:52.196 ************************************ 00:35:52.196 11:36:44 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.196 11:36:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:52.196 11:36:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.196 11:36:44 -- common/autotest_common.sh@10 -- # set +x 00:35:52.196 ************************************ 00:35:52.196 START TEST nvmf_identify_passthru 00:35:52.196 ************************************ 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:52.196 * Looking for test 
storage... 00:35:52.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.196 --rc genhtml_branch_coverage=1 00:35:52.196 --rc genhtml_function_coverage=1 00:35:52.196 --rc genhtml_legend=1 00:35:52.196 --rc geninfo_all_blocks=1 00:35:52.196 --rc geninfo_unexecuted_blocks=1 00:35:52.196 00:35:52.196 ' 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.196 --rc genhtml_branch_coverage=1 00:35:52.196 --rc genhtml_function_coverage=1 00:35:52.196 --rc genhtml_legend=1 00:35:52.196 --rc geninfo_all_blocks=1 00:35:52.196 --rc geninfo_unexecuted_blocks=1 00:35:52.196 00:35:52.196 ' 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.196 --rc genhtml_branch_coverage=1 00:35:52.196 --rc genhtml_function_coverage=1 00:35:52.196 --rc genhtml_legend=1 00:35:52.196 --rc geninfo_all_blocks=1 00:35:52.196 --rc geninfo_unexecuted_blocks=1 00:35:52.196 00:35:52.196 ' 00:35:52.196 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:52.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:52.196 --rc genhtml_branch_coverage=1 00:35:52.196 --rc genhtml_function_coverage=1 00:35:52.196 --rc genhtml_legend=1 00:35:52.196 --rc geninfo_all_blocks=1 00:35:52.196 --rc geninfo_unexecuted_blocks=1 00:35:52.196 00:35:52.196 ' 00:35:52.196 11:36:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.196 11:36:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.196 11:36:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.196 11:36:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.196 11:36:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.196 11:36:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:52.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:52.196 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:52.196 11:36:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:52.196 11:36:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:52.197 11:36:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.197 11:36:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.197 11:36:44 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.197 11:36:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:52.197 11:36:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:52.197 11:36:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.197 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:52.197 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:52.197 11:36:44 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:52.197 11:36:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:58.783 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:58.784 11:36:51 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:58.784 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:58.784 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:58.784 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:58.784 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:58.784 11:36:51 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:58.784 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:59.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:59.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:35:59.045 00:35:59.045 --- 10.0.0.2 ping statistics --- 00:35:59.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.045 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:59.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:59.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:35:59.045 00:35:59.045 --- 10.0.0.1 ping statistics --- 00:35:59.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:59.045 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:59.045 11:36:51 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:59.045 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:59.045 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.045 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:59.305 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:59.305 11:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:59.305 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:59.305 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:59.305 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:59.305 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:59.305 11:36:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:59.877 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:59.877 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:59.877 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:59.877 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:00.447 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.447 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:00.447 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.447 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3031851 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:00.447 11:36:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3031851 00:36:00.447 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3031851 ']' 00:36:00.448 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.448 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.448 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.448 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.448 11:36:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.448 [2024-11-20 11:36:52.999657] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:36:00.448 [2024-11-20 11:36:52.999712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.448 [2024-11-20 11:36:53.092866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:00.448 [2024-11-20 11:36:53.130475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.448 [2024-11-20 11:36:53.130506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:00.448 [2024-11-20 11:36:53.130514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.448 [2024-11-20 11:36:53.130523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.448 [2024-11-20 11:36:53.130529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:00.448 [2024-11-20 11:36:53.132029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.448 [2024-11-20 11:36:53.132192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.448 [2024-11-20 11:36:53.132279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.448 [2024-11-20 11:36:53.132279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:01.389 11:36:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.389 INFO: Log level set to 20 00:36:01.389 INFO: Requests: 00:36:01.389 { 00:36:01.389 "jsonrpc": "2.0", 00:36:01.389 "method": "nvmf_set_config", 00:36:01.389 "id": 1, 00:36:01.389 "params": { 00:36:01.389 "admin_cmd_passthru": { 00:36:01.389 "identify_ctrlr": true 00:36:01.389 } 00:36:01.389 } 00:36:01.389 } 00:36:01.389 00:36:01.389 INFO: response: 00:36:01.389 { 00:36:01.389 "jsonrpc": "2.0", 00:36:01.389 "id": 1, 00:36:01.389 "result": true 00:36:01.389 } 00:36:01.389 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.389 11:36:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.389 INFO: Setting log level to 20 00:36:01.389 INFO: Setting log level to 20 00:36:01.389 INFO: Log level set to 20 00:36:01.389 INFO: Log level set to 20 00:36:01.389 INFO: Requests: 00:36:01.389 { 00:36:01.389 "jsonrpc": "2.0", 00:36:01.389 "method": "framework_start_init", 00:36:01.389 "id": 1 00:36:01.389 } 00:36:01.389 00:36:01.389 INFO: Requests: 00:36:01.389 { 00:36:01.389 "jsonrpc": "2.0", 00:36:01.389 "method": "framework_start_init", 00:36:01.389 "id": 1 00:36:01.389 } 00:36:01.389 00:36:01.389 [2024-11-20 11:36:53.877506] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:01.389 INFO: response: 00:36:01.389 { 00:36:01.389 "jsonrpc": "2.0", 00:36:01.389 "id": 1, 00:36:01.389 "result": true 00:36:01.389 } 00:36:01.389 00:36:01.389 INFO: response: 00:36:01.389 { 00:36:01.389 "jsonrpc": "2.0", 00:36:01.389 "id": 1, 00:36:01.389 "result": true 00:36:01.389 } 00:36:01.389 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.389 11:36:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.389 11:36:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:01.389 INFO: Setting log level to 40 00:36:01.389 INFO: Setting log level to 40 00:36:01.389 INFO: Setting log level to 40 00:36:01.389 [2024-11-20 11:36:53.890837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.389 11:36:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.389 11:36:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.389 11:36:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.650 Nvme0n1 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.650 [2024-11-20 11:36:54.280292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.650 [ 00:36:01.650 { 00:36:01.650 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:01.650 "subtype": "Discovery", 00:36:01.650 "listen_addresses": [], 00:36:01.650 "allow_any_host": true, 00:36:01.650 "hosts": [] 00:36:01.650 }, 00:36:01.650 { 00:36:01.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.650 "subtype": "NVMe", 00:36:01.650 "listen_addresses": [ 00:36:01.650 { 00:36:01.650 "trtype": "TCP", 00:36:01.650 "adrfam": "IPv4", 00:36:01.650 "traddr": "10.0.0.2", 00:36:01.650 "trsvcid": "4420" 00:36:01.650 } 00:36:01.650 ], 00:36:01.650 "allow_any_host": true, 00:36:01.650 "hosts": [], 00:36:01.650 "serial_number": 
"SPDK00000000000001", 00:36:01.650 "model_number": "SPDK bdev Controller", 00:36:01.650 "max_namespaces": 1, 00:36:01.650 "min_cntlid": 1, 00:36:01.650 "max_cntlid": 65519, 00:36:01.650 "namespaces": [ 00:36:01.650 { 00:36:01.650 "nsid": 1, 00:36:01.650 "bdev_name": "Nvme0n1", 00:36:01.650 "name": "Nvme0n1", 00:36:01.650 "nguid": "36344730526054870025384500000044", 00:36:01.650 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:01.650 } 00:36:01.650 ] 00:36:01.650 } 00:36:01.650 ] 00:36:01.650 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:01.650 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:01.910 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:01.910 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:01.910 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:01.910 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:02.169 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:02.169 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:02.169 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:02.169 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:02.169 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.169 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.169 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.170 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:02.170 11:36:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.170 rmmod nvme_tcp 00:36:02.170 rmmod nvme_fabrics 00:36:02.170 rmmod nvme_keyring 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3031851 ']' 00:36:02.170 11:36:54 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3031851 00:36:02.170 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3031851 ']' 00:36:02.170 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3031851 00:36:02.170 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:02.170 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.170 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031851 00:36:02.431 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:02.431 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:02.431 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031851' 00:36:02.431 killing process with pid 3031851 00:36:02.431 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3031851 00:36:02.431 11:36:54 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3031851 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.692 11:36:55 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.692 11:36:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.692 11:36:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.608 11:36:57 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.608 00:36:04.608 real 0m13.172s 00:36:04.608 user 0m10.492s 00:36:04.608 sys 0m6.694s 00:36:04.608 11:36:57 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.608 11:36:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:04.608 ************************************ 00:36:04.608 END TEST nvmf_identify_passthru 00:36:04.608 ************************************ 00:36:04.868 11:36:57 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:04.868 11:36:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:04.868 11:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.868 11:36:57 -- common/autotest_common.sh@10 -- # set +x 00:36:04.868 ************************************ 00:36:04.868 START TEST nvmf_dif 00:36:04.868 ************************************ 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:04.868 * Looking for test storage... 
00:36:04.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.868 11:36:57 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:04.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.868 --rc genhtml_branch_coverage=1 00:36:04.868 --rc genhtml_function_coverage=1 00:36:04.868 --rc genhtml_legend=1 00:36:04.868 --rc geninfo_all_blocks=1 00:36:04.868 --rc geninfo_unexecuted_blocks=1 00:36:04.868 00:36:04.868 ' 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:04.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.868 --rc genhtml_branch_coverage=1 00:36:04.868 --rc genhtml_function_coverage=1 00:36:04.868 --rc genhtml_legend=1 00:36:04.868 --rc geninfo_all_blocks=1 00:36:04.868 --rc geninfo_unexecuted_blocks=1 00:36:04.868 00:36:04.868 ' 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:04.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.868 --rc genhtml_branch_coverage=1 00:36:04.868 --rc genhtml_function_coverage=1 00:36:04.868 --rc genhtml_legend=1 00:36:04.868 --rc geninfo_all_blocks=1 00:36:04.868 --rc geninfo_unexecuted_blocks=1 00:36:04.868 00:36:04.868 ' 00:36:04.868 11:36:57 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:04.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.868 --rc genhtml_branch_coverage=1 00:36:04.868 --rc genhtml_function_coverage=1 00:36:04.868 --rc genhtml_legend=1 00:36:04.868 --rc geninfo_all_blocks=1 00:36:04.868 --rc geninfo_unexecuted_blocks=1 00:36:04.868 00:36:04.868 ' 00:36:04.868 11:36:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.868 11:36:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.129 11:36:57 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.129 11:36:57 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.129 11:36:57 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.129 11:36:57 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.129 11:36:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.129 11:36:57 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.129 11:36:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.129 11:36:57 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:05.129 11:36:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:05.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.129 11:36:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:05.129 11:36:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:05.129 11:36:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:05.129 11:36:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:05.129 11:36:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:05.129 11:36:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.130 11:36:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:05.130 11:36:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.130 11:36:57 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:05.130 11:36:57 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:05.130 11:36:57 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:05.130 11:36:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:13.274 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.274 
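The loop traced here is nvmf/common.sh's gather_supported_nvmf_pci_devs: it seeds the e810/x722/mlx arrays with known NIC device IDs (the Intel E810 parts are 0x1592/0x159b), then walks each matching PCI function and resolves its kernel net device through sysfs. Stripped of the harness, the per-device lookup reduces to the sketch below; the PCI addresses are taken from this trace, and no SPDK helpers are assumed:

    # Sketch: resolve the net device name behind each supported PCI function.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
      for net in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$net" ] || continue                         # skip functions with no bound netdev
        echo "Found net devices under $pci: ${net##*/}"   # e.g. cvl_0_0, cvl_0_1
      done
    done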
11:37:04 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:13.274 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:13.274 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:13.274 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.274 11:37:04 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.274 11:37:05 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.274 11:37:05 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:36:13.275 00:36:13.275 --- 10.0.0.2 ping statistics --- 00:36:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.275 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
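The nvmftestinit trace above is the whole physical-NIC topology for this run: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target side, the sibling port (cvl_0_1) stays in the root namespace as the initiator side, and an iptables rule opens the NVMe/TCP listener port. A minimal sketch of the same wiring, assuming this run's cvl_* device names and 10.0.0.0/24 addressing (both are machine-specific, not constants):

  # Sketch of the namespace wiring traced above, not the harness itself;
  # device names and addresses are this run's values.
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean ports
  ip netns add cvl_0_0_ns_spdk                           # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  ping -c 1 10.0.0.2                                     # root ns -> target side

The matching reverse ping from inside the namespace is traced just below; together the two pings verify layer-3 connectivity in both directions before nvmf_tgt is started inside the namespace.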
00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:13.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:36:13.275 00:36:13.275 --- 10.0.0.1 ping statistics --- 00:36:13.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.275 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:13.275 11:37:05 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:15.819 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:15.819 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:15.819 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:16.391 11:37:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:16.391 11:37:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3037988 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3037988 00:36:16.391 11:37:08 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3037988 ']' 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:16.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:16.391 11:37:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:16.391 [2024-11-20 11:37:08.973737] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:36:16.391 [2024-11-20 11:37:08.973805] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.391 [2024-11-20 11:37:09.074031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.391 [2024-11-20 11:37:09.125306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.391 [2024-11-20 11:37:09.125356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.391 [2024-11-20 11:37:09.125364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.391 [2024-11-20 11:37:09.125371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.391 [2024-11-20 11:37:09.125383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.391 [2024-11-20 11:37:09.126195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:17.335 11:37:09 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 11:37:09 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.335 11:37:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:17.335 11:37:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 [2024-11-20 11:37:09.831546] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.335 11:37:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 ************************************ 00:36:17.335 START TEST fio_dif_1_default 00:36:17.335 ************************************ 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 bdev_null0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.335 [2024-11-20 11:37:09.924009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:17.335 { 00:36:17.335 "params": { 00:36:17.335 "name": "Nvme$subsystem", 00:36:17.335 "trtype": "$TEST_TRANSPORT", 00:36:17.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.335 "adrfam": "ipv4", 00:36:17.335 "trsvcid": "$NVMF_PORT", 00:36:17.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.335 "hdgst": ${hdgst:-false}, 00:36:17.335 "ddgst": ${ddgst:-false} 00:36:17.335 }, 00:36:17.335 "method": "bdev_nvme_attach_controller" 00:36:17.335 } 00:36:17.335 EOF 00:36:17.335 )") 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
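The stretch above, from gen_nvmf_target_json through the ldd | grep probes, is the glue that lets fio drive the target without a kernel NVMe initiator: one bdev_nvme_attach_controller stanza is generated per subsystem and merged by jq (the merged JSON is printed just below), the config and the generated job file are handed to fio as /dev/fd/62 and /dev/fd/61, the spdk_bdev external ioengine is injected via LD_PRELOAD, and the libasan/libclang_rt.asan lookups merely decide whether a sanitizer runtime must be preloaded ahead of the plugin (empty in this build). A rough standalone equivalent, with an illustrative job file and an assumed outer subsystems/bdev wrapper; neither appears verbatim in this log:

  # Sketch only: replay the printed attach config against fio's external
  # spdk_bdev engine; the job file below is illustrative, not the one the
  # harness generates, and the outer "subsystems" wrapper is an assumption.
  conf='{"subsystems":[{"subsystem":"bdev","config":[
          {"params":{"name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2",
                     "adrfam":"ipv4","trsvcid":"4420",
                     "subnqn":"nqn.2016-06.io.spdk:cnode0",
                     "hostnqn":"nqn.2016-06.io.spdk:host0",
                     "hdgst":false,"ddgst":false},
           "method":"bdev_nvme_attach_controller"}]}]}'
  job=$'[job0]\nioengine=spdk_bdev\nthread=1\nfilename=Nvme0n1\nrw=randread\nbs=4k\niodepth=4\nsize=64m'
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(printf '%s\n' "$conf") <(printf '%s\n' "$job")

Here filename=Nvme0n1 assumes the attached controller surfaces its namespace under that bdev name, and thread=1 reflects the SPDK fio plugins' need to run jobs as threads rather than forked processes.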
00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:17.335 "params": { 00:36:17.335 "name": "Nvme0", 00:36:17.335 "trtype": "tcp", 00:36:17.335 "traddr": "10.0.0.2", 00:36:17.335 "adrfam": "ipv4", 00:36:17.335 "trsvcid": "4420", 00:36:17.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.335 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.335 "hdgst": false, 00:36:17.335 "ddgst": false 00:36:17.335 }, 00:36:17.335 "method": "bdev_nvme_attach_controller" 00:36:17.335 }' 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:17.335 11:37:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:17.335 11:37:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:17.335 11:37:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:17.335 11:37:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:17.336 11:37:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.905 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:17.905 fio-3.35 00:36:17.905 Starting 1 thread 00:36:30.156 00:36:30.156 filename0: (groupid=0, jobs=1): err= 0: pid=3038523: Wed Nov 20 11:37:20 2024 00:36:30.156 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10034msec) 00:36:30.156 slat (nsec): min=5381, max=36254, avg=6139.95, stdev=1757.80 00:36:30.156 clat (usec): min=844, max=42932, avg=40938.11, stdev=2588.45 00:36:30.156 lat (usec): min=849, max=42968, avg=40944.25, stdev=2588.61 00:36:30.156 clat percentiles (usec): 00:36:30.156 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:30.156 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:30.156 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:30.156 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:30.156 | 99.99th=[42730] 00:36:30.156 bw ( KiB/s): min= 352, max= 448, per=99.83%, avg=390.40, stdev=19.70, samples=20 00:36:30.156 iops : min= 88, max= 112, avg=97.60, stdev= 4.92, samples=20 00:36:30.156 lat (usec) : 1000=0.41% 00:36:30.156 lat (msec) : 50=99.59% 00:36:30.156 cpu : usr=93.62%, sys=6.14%, ctx=16, majf=0, minf=222 00:36:30.156 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.156 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.156 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:30.156 00:36:30.156 Run 
status group 0 (all jobs): 00:36:30.156 READ: bw=391KiB/s (400kB/s), 391KiB/s-391KiB/s (400kB/s-400kB/s), io=3920KiB (4014kB), run=10034-10034msec 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.156 00:36:30.156 real 0m11.222s 00:36:30.156 user 0m18.298s 00:36:30.156 sys 0m1.012s 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 ************************************ 00:36:30.156 END TEST fio_dif_1_default 00:36:30.156 ************************************ 00:36:30.156 11:37:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:30.156 11:37:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:30.156 11:37:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 ************************************ 00:36:30.156 START TEST fio_dif_1_multi_subsystems 00:36:30.156 ************************************ 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 bdev_null0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.156 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.157 [2024-11-20 11:37:21.226042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.157 bdev_null1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.157 { 00:36:30.157 "params": { 00:36:30.157 "name": "Nvme$subsystem", 00:36:30.157 "trtype": "$TEST_TRANSPORT", 00:36:30.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.157 "adrfam": "ipv4", 00:36:30.157 "trsvcid": "$NVMF_PORT", 00:36:30.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.157 "hdgst": ${hdgst:-false}, 00:36:30.157 "ddgst": ${ddgst:-false} 00:36:30.157 }, 00:36:30.157 "method": "bdev_nvme_attach_controller" 00:36:30.157 } 00:36:30.157 EOF 00:36:30.157 )") 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.157 11:37:21 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.157 { 00:36:30.157 "params": { 00:36:30.157 "name": "Nvme$subsystem", 00:36:30.157 "trtype": "$TEST_TRANSPORT", 00:36:30.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.157 "adrfam": "ipv4", 00:36:30.157 "trsvcid": "$NVMF_PORT", 00:36:30.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.157 "hdgst": ${hdgst:-false}, 00:36:30.157 "ddgst": ${ddgst:-false} 00:36:30.157 }, 00:36:30.157 "method": "bdev_nvme_attach_controller" 00:36:30.157 } 00:36:30.157 EOF 00:36:30.157 )") 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:30.157 "params": { 00:36:30.157 "name": "Nvme0", 00:36:30.157 "trtype": "tcp", 00:36:30.157 "traddr": "10.0.0.2", 00:36:30.157 "adrfam": "ipv4", 00:36:30.157 "trsvcid": "4420", 00:36:30.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.157 "hdgst": false, 00:36:30.157 "ddgst": false 00:36:30.157 }, 00:36:30.157 "method": "bdev_nvme_attach_controller" 00:36:30.157 },{ 00:36:30.157 "params": { 00:36:30.157 "name": "Nvme1", 00:36:30.157 "trtype": "tcp", 00:36:30.157 "traddr": "10.0.0.2", 00:36:30.157 "adrfam": "ipv4", 00:36:30.157 "trsvcid": "4420", 00:36:30.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:30.157 "hdgst": false, 00:36:30.157 "ddgst": false 00:36:30.157 }, 00:36:30.157 "method": "bdev_nvme_attach_controller" 00:36:30.157 }' 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:30.157 11:37:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.157 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:30.157 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:30.157 fio-3.35 00:36:30.157 Starting 2 threads 00:36:40.157 00:36:40.157 filename0: (groupid=0, jobs=1): err= 0: pid=3040753: Wed Nov 20 11:37:32 2024 00:36:40.157 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:36:40.157 slat (nsec): min=5376, max=28657, avg=6292.53, stdev=1416.58 00:36:40.157 clat (usec): min=607, max=43939, avg=21082.20, stdev=20154.31 00:36:40.157 lat (usec): min=615, max=43968, avg=21088.49, stdev=20154.28 00:36:40.157 clat percentiles (usec): 00:36:40.157 | 1.00th=[ 734], 5.00th=[ 799], 10.00th=[ 816], 20.00th=[ 840], 00:36:40.157 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[40633], 60.00th=[41157], 00:36:40.157 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:40.157 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:36:40.157 | 99.99th=[43779] 00:36:40.157 bw ( KiB/s): min= 672, max= 768, per=66.27%, avg=759.58, stdev=23.47, samples=19 00:36:40.157 iops : min= 168, max= 192, avg=189.89, stdev= 5.87, samples=19 00:36:40.157 lat (usec) : 750=1.58%, 1000=46.52% 00:36:40.157 lat (msec) : 2=1.69%, 50=50.21% 00:36:40.157 cpu : usr=95.53%, sys=4.25%, ctx=9, majf=0, minf=169 00:36:40.157 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:40.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.157 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:40.157 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:40.157 filename1: (groupid=0, jobs=1): err= 0: pid=3040754: Wed Nov 20 11:37:32 2024 00:36:40.157 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10030msec) 00:36:40.157 slat (nsec): min=5377, max=28606, avg=6299.10, stdev=1512.17 00:36:40.157 clat (usec): min=40875, max=43029, avg=41089.09, stdev=329.24 00:36:40.157 lat (usec): min=40883, max=43035, avg=41095.38, stdev=329.36 00:36:40.157 clat percentiles (usec): 00:36:40.157 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:40.157 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:40.157 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:40.157 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:36:40.157 | 99.99th=[43254] 00:36:40.157 bw ( KiB/s): min= 384, max= 416, per=33.88%, avg=388.80, stdev=11.72, samples=20 00:36:40.157 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:36:40.157 lat (msec) : 50=100.00% 00:36:40.157 cpu : usr=95.39%, sys=4.41%, ctx=8, majf=0, minf=47 00:36:40.157 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:40.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.158 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:40.158 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:40.158 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:40.158 00:36:40.158 Run status group 0 (all jobs): 00:36:40.158 READ: bw=1145KiB/s (1173kB/s), 389KiB/s-758KiB/s (399kB/s-776kB/s), io=11.2MiB (11.8MB), run=10002-10030msec 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 00:36:40.158 real 0m11.306s 00:36:40.158 user 0m35.441s 00:36:40.158 sys 0m1.225s 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 ************************************ 00:36:40.158 END TEST fio_dif_1_multi_subsystems 00:36:40.158 ************************************ 00:36:40.158 11:37:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:36:40.158 11:37:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:40.158 11:37:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 ************************************ 00:36:40.158 START TEST fio_dif_rand_params 00:36:40.158 ************************************ 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 bdev_null0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:40.158 [2024-11-20 11:37:32.615218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.158 11:37:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:40.158 { 00:36:40.158 "params": { 00:36:40.158 "name": "Nvme$subsystem", 00:36:40.158 "trtype": "$TEST_TRANSPORT", 00:36:40.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.158 "adrfam": "ipv4", 00:36:40.158 "trsvcid": "$NVMF_PORT", 00:36:40.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.158 "hdgst": ${hdgst:-false}, 00:36:40.158 "ddgst": ${ddgst:-false} 00:36:40.158 }, 00:36:40.158 "method": "bdev_nvme_attach_controller" 00:36:40.158 } 00:36:40.158 EOF 00:36:40.158 )") 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
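Under the rpc_cmd wrappers, each create_subsystems pass in this fio_dif_rand_params test reduces to four RPCs against the namespaced target; it is the nvmf_create_transport -t tcp -o --dif-insert-or-strip issued at startup that makes the target insert and verify protection information on behalf of the DIF-unaware initiator, and the jq/IFS merge of the fio attach config resumes immediately below. The same setup as plain scripts/rpc.py calls (a sketch; rpc_cmd ultimately talks to the default /var/tmp/spdk.sock socket):

  # The four RPCs behind create_subsystems 0, as direct scripts/rpc.py calls:
  # a 64 MiB null bdev with 512-byte blocks + 16 bytes of metadata, DIF type 3.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420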
00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:40.158 "params": { 00:36:40.158 "name": "Nvme0", 00:36:40.158 "trtype": "tcp", 00:36:40.158 "traddr": "10.0.0.2", 00:36:40.158 "adrfam": "ipv4", 00:36:40.158 "trsvcid": "4420", 00:36:40.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.158 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.158 "hdgst": false, 00:36:40.158 "ddgst": false 00:36:40.158 }, 00:36:40.158 "method": "bdev_nvme_attach_controller" 00:36:40.158 }' 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:40.158 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.159 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.159 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.159 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:40.159 11:37:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.420 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:40.420 ... 
00:36:40.420 fio-3.35 00:36:40.420 Starting 3 threads 00:36:47.097 00:36:47.097 filename0: (groupid=0, jobs=1): err= 0: pid=3042975: Wed Nov 20 11:37:38 2024 00:36:47.097 read: IOPS=144, BW=18.1MiB/s (18.9MB/s)(91.1MiB/5046msec) 00:36:47.097 slat (nsec): min=5425, max=31743, avg=6290.88, stdev=1848.04 00:36:47.097 clat (msec): min=4, max=130, avg=20.69, stdev=23.25 00:36:47.097 lat (msec): min=4, max=130, avg=20.70, stdev=23.25 00:36:47.097 clat percentiles (msec): 00:36:47.097 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:36:47.097 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:36:47.097 | 70.00th=[ 10], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 52], 00:36:47.097 | 99.00th=[ 91], 99.50th=[ 91], 99.90th=[ 131], 99.95th=[ 131], 00:36:47.097 | 99.99th=[ 131] 00:36:47.097 bw ( KiB/s): min=13824, max=25344, per=17.48%, avg=18611.20, stdev=3606.38, samples=10 00:36:47.098 iops : min= 108, max= 198, avg=145.40, stdev=28.17, samples=10 00:36:47.098 lat (msec) : 10=70.37%, 20=3.57%, 50=16.05%, 100=9.88%, 250=0.14% 00:36:47.098 cpu : usr=95.50%, sys=4.28%, ctx=10, majf=0, minf=54 00:36:47.098 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.098 issued rwts: total=729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.098 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:47.098 filename0: (groupid=0, jobs=1): err= 0: pid=3042976: Wed Nov 20 11:37:38 2024 00:36:47.098 read: IOPS=336, BW=42.1MiB/s (44.2MB/s)(213MiB/5045msec) 00:36:47.098 slat (nsec): min=7901, max=32489, avg=8731.62, stdev=1157.50 00:36:47.098 clat (usec): min=4471, max=49957, avg=8866.96, stdev=6648.96 00:36:47.098 lat (usec): min=4480, max=49966, avg=8875.69, stdev=6649.06 00:36:47.098 clat percentiles (usec): 00:36:47.098 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6652], 00:36:47.098 | 30.00th=[ 7177], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8029], 00:36:47.098 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[10683], 00:36:47.098 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[50070], 00:36:47.098 | 99.99th=[50070] 00:36:47.098 bw ( KiB/s): min=25856, max=48384, per=40.82%, avg=43468.80, stdev=7395.80, samples=10 00:36:47.098 iops : min= 202, max= 378, avg=339.60, stdev=57.78, samples=10 00:36:47.098 lat (msec) : 10=90.76%, 20=6.47%, 50=2.76% 00:36:47.098 cpu : usr=94.23%, sys=5.53%, ctx=6, majf=0, minf=103 00:36:47.098 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.098 issued rwts: total=1700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.098 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:47.098 filename0: (groupid=0, jobs=1): err= 0: pid=3042977: Wed Nov 20 11:37:38 2024 00:36:47.098 read: IOPS=350, BW=43.8MiB/s (46.0MB/s)(221MiB/5045msec) 00:36:47.098 slat (nsec): min=5687, max=31660, avg=8586.56, stdev=1133.53 00:36:47.098 clat (usec): min=4535, max=87726, avg=8520.86, stdev=5731.50 00:36:47.098 lat (usec): min=4544, max=87735, avg=8529.45, stdev=5731.61 00:36:47.098 clat percentiles (usec): 00:36:47.098 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6521], 00:36:47.098 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 
7701], 60.00th=[ 8029], 00:36:47.098 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10552], 00:36:47.098 | 99.00th=[47449], 99.50th=[49021], 99.90th=[53740], 99.95th=[87557], 00:36:47.098 | 99.99th=[87557] 00:36:47.098 bw ( KiB/s): min=35584, max=49408, per=42.48%, avg=45235.20, stdev=4484.75, samples=10 00:36:47.098 iops : min= 278, max= 386, avg=353.40, stdev=35.04, samples=10 00:36:47.098 lat (msec) : 10=90.90%, 20=7.35%, 50=1.53%, 100=0.23% 00:36:47.098 cpu : usr=94.81%, sys=4.96%, ctx=6, majf=0, minf=105 00:36:47.098 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:47.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:47.098 issued rwts: total=1769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:47.098 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:47.098 00:36:47.098 Run status group 0 (all jobs): 00:36:47.098 READ: bw=104MiB/s (109MB/s), 18.1MiB/s-43.8MiB/s (18.9MB/s-46.0MB/s), io=525MiB (550MB), run=5045-5046msec 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 
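As a quick consistency check on the Run status group summary a few entries above: the three jobs read 91.1 MiB, 213 MiB and 221 MiB, so

  91.1 + 213 + 221 ~= 525 MiB total
  525 MiB / 5.046 s ~= 104 MiB/s, and 104 MiB/s * 1.048576 ~= 109 MB/s

which matches both the io= and bw= fields of the READ line. The slowest job also checks out on its own: 729 reads * 128 KiB = 91.1 MiB, about 18.1 MiB/s over the same window. The surrounding trace then tears subsystem 0 down and rebuilds three subsystems on --dif-type 2 null bdevs for the 8-job, iodepth-16, two-file pass configured just above.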
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 bdev_null0
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 [2024-11-20 11:37:38.942458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 bdev_null1
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.098 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.099 11:37:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.099 bdev_null2
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
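The create_json_sub_conf/gen_nvmf_target_json calls that start here build, one stanza per subsystem id, the bdev_nvme_attach_controller JSON that the fio spdk_bdev plugin loads through --spdk_json_conf; the /dev/fd/62 and /dev/fd/61 paths carry the generated JSON config and the generated fio job file. A simplified sketch of that wiring, assuming bash process substitution is what backs those file descriptors:

    # hand fio two synthetic files: the SPDK JSON config and the job file
    fio_bdev --ioengine=spdk_bdev \
             --spdk_json_conf <(create_json_sub_conf 0 1 2) \
             <(gen_fio_conf)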
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:47.099 {
00:36:47.099 "params": {
00:36:47.099 "name": "Nvme$subsystem",
00:36:47.099 "trtype": "$TEST_TRANSPORT",
00:36:47.099 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:47.099 "adrfam": "ipv4",
00:36:47.099 "trsvcid": "$NVMF_PORT",
00:36:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:47.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:47.099 "hdgst": ${hdgst:-false},
00:36:47.099 "ddgst": ${ddgst:-false}
00:36:47.099 },
00:36:47.099 "method": "bdev_nvme_attach_controller"
00:36:47.099 }
00:36:47.099 EOF
00:36:47.099 )")
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:47.099 {
00:36:47.099 "params": {
00:36:47.099 "name": "Nvme$subsystem",
00:36:47.099 "trtype": "$TEST_TRANSPORT",
00:36:47.099 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:47.099 "adrfam": "ipv4",
00:36:47.099 "trsvcid": "$NVMF_PORT",
00:36:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:47.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:47.099 "hdgst": ${hdgst:-false},
00:36:47.099 "ddgst": ${ddgst:-false}
00:36:47.099 },
00:36:47.099 "method": "bdev_nvme_attach_controller"
00:36:47.099 }
00:36:47.099 EOF
00:36:47.099 )")
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:47.099 {
00:36:47.099 "params": {
00:36:47.099 "name": "Nvme$subsystem",
00:36:47.099 "trtype": "$TEST_TRANSPORT",
00:36:47.099 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:47.099 "adrfam": "ipv4",
00:36:47.099 "trsvcid": "$NVMF_PORT",
00:36:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:47.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:47.099 "hdgst": ${hdgst:-false},
00:36:47.099 "ddgst": ${ddgst:-false}
00:36:47.099 },
00:36:47.099 "method": "bdev_nvme_attach_controller"
00:36:47.099 }
00:36:47.099 EOF
00:36:47.099 )")
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:47.099 11:37:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:47.099 "params": {
00:36:47.099 "name": "Nvme0",
00:36:47.099 "trtype": "tcp",
00:36:47.099 "traddr": "10.0.0.2",
00:36:47.099 "adrfam": "ipv4",
00:36:47.099 "trsvcid": "4420",
00:36:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:47.099 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:47.099 "hdgst": false,
00:36:47.099 "ddgst": false
00:36:47.099 },
00:36:47.099 "method": "bdev_nvme_attach_controller"
00:36:47.099 },{
00:36:47.099 "params": {
00:36:47.099 "name": "Nvme1",
00:36:47.099 "trtype": "tcp",
00:36:47.099 "traddr": "10.0.0.2",
00:36:47.099 "adrfam": "ipv4",
00:36:47.099 "trsvcid": "4420",
00:36:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:47.099 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:47.099 "hdgst": false,
00:36:47.099 "ddgst": false
00:36:47.099 },
00:36:47.099 "method": "bdev_nvme_attach_controller"
00:36:47.099 },{
00:36:47.099 "params": {
00:36:47.099 "name": "Nvme2",
00:36:47.099 "trtype": "tcp",
00:36:47.099 "traddr": "10.0.0.2",
00:36:47.099 "adrfam": "ipv4",
00:36:47.099 "trsvcid": "4420",
00:36:47.099 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:36:47.099 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:36:47.099 "hdgst": false,
00:36:47.099 "ddgst": false
00:36:47.099 },
00:36:47.099 "method": "bdev_nvme_attach_controller"
00:36:47.100 }'
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:47.100 11:37:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:47.100 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:47.100 ...
00:36:47.100 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:47.100 ...
00:36:47.100 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:36:47.100 ...
00:36:47.100 fio-3.35
00:36:47.100 Starting 24 threads
00:36:59.342
00:36:59.342 filename0: (groupid=0, jobs=1): err= 0: pid=3044464: Wed Nov 20 11:37:50 2024
00:36:59.342 read: IOPS=686, BW=2745KiB/s (2811kB/s)(26.9MiB/10019msec)
00:36:59.342 slat (usec): min=5, max=104, avg=18.87, stdev=13.51
00:36:59.342 clat (usec): min=4992, max=40625, avg=23154.52, stdev=3291.75
00:36:59.342 lat (usec): min=5009, max=40642, avg=23173.40, stdev=3292.64
00:36:59.342 clat percentiles (usec):
00:36:59.342 | 1.00th=[12649], 5.00th=[16319], 10.00th=[19268], 20.00th=[23200],
00:36:59.342 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:36:59.342 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[25035],
00:36:59.342 | 99.00th=[34866], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109],
00:36:59.342 | 99.99th=[40633]
00:36:59.342 bw ( KiB/s): min= 2608, max= 2944, per=4.23%, avg=2744.00, stdev=93.22, samples=20
00:36:59.342 iops : min= 652, max= 736, avg=686.00, stdev=23.31, samples=20
00:36:59.342 lat (msec) : 10=0.68%, 20=9.89%, 50=89.43%
00:36:59.342 cpu : usr=98.89%, sys=0.84%, ctx=13, majf=0, minf=9
00:36:59.342 IO depths : 1=4.5%, 2=9.0%, 4=20.6%, 8=57.4%, 16=8.5%, 32=0.0%, >=64=0.0%
00:36:59.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.342 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.342 issued rwts: total=6876,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.342 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.342 filename0: (groupid=0, jobs=1): err= 0: pid=3044465: Wed Nov 20 11:37:50 2024
00:36:59.342 read: IOPS=676, BW=2706KiB/s (2771kB/s)(26.5MiB/10017msec)
00:36:59.342 slat (nsec): min=5573, max=82200, avg=18814.93, stdev=13169.47
00:36:59.342 clat (usec): min=7392, max=37735, avg=23474.01, stdev=1838.24
00:36:59.342 lat (usec): min=7405, max=37741, avg=23492.83, stdev=1838.92
00:36:59.342 clat percentiles (usec):
00:36:59.342 | 1.00th=[13960], 5.00th=[22676], 10.00th=[23200], 20.00th=[23200],
00:36:59.342 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:36:59.342 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.342 | 99.00th=[26084], 99.50th=[29754], 99.90th=[37487], 99.95th=[37487],
00:36:59.342 | 99.99th=[37487]
00:36:59.342 bw ( KiB/s): min= 2560, max= 2944, per=4.17%, avg=2704.84, stdev=79.37, samples=19
00:36:59.342 iops : min= 640, max= 736, avg=676.21, stdev=19.84, samples=19
00:36:59.342 lat (msec) : 10=0.38%, 20=2.67%, 50=96.95%
00:36:59.342 cpu : usr=98.61%, sys=0.87%, ctx=120, majf=0, minf=9
00:36:59.342 IO depths : 1=5.9%, 2=11.9%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0%
00:36:59.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.342 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.342 issued rwts: total=6776,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.342 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.342 filename0: (groupid=0, jobs=1): err= 0: pid=3044466: Wed Nov 20 11:37:50 2024
00:36:59.342 read: IOPS=673, BW=2694KiB/s (2758kB/s)(26.4MiB/10022msec)
00:36:59.342 slat (usec): min=5, max=120, avg= 9.35, stdev= 7.39
00:36:59.342 clat (usec): min=8566, max=38483, avg=23673.64, stdev=2091.55
00:36:59.342 lat (usec): min=8574, max=38495, avg=23682.99, stdev=2090.97
00:36:59.342 clat percentiles (usec):
00:36:59.342 | 1.00th=[14746], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462],
00:36:59.342 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725],
00:36:59.342 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:36:59.342 | 99.00th=[31065], 99.50th=[32375], 99.90th=[38536], 99.95th=[38536],
00:36:59.342 | 99.99th=[38536]
00:36:59.342 bw ( KiB/s): min= 2560, max= 2928, per=4.15%, avg=2694.80, stdev=63.75, samples=20
00:36:59.342 iops : min= 640, max= 732, avg=673.70, stdev=15.94, samples=20
00:36:59.342 lat (msec) : 10=0.37%, 20=3.24%, 50=96.38%
00:36:59.342 cpu : usr=98.98%, sys=0.75%, ctx=12, majf=0, minf=9
00:36:59.342 IO depths : 1=4.0%, 2=9.7%, 4=23.7%, 8=54.0%, 16=8.6%, 32=0.0%, >=64=0.0%
00:36:59.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.342 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.342 issued rwts: total=6749,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.342 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.342 filename0: (groupid=0, jobs=1): err= 0: pid=3044467: Wed Nov 20 11:37:50 2024
00:36:59.342 read: IOPS=675, BW=2703KiB/s (2768kB/s)(26.4MiB/10010msec)
00:36:59.342 slat (nsec): min=5559, max=72105, avg=13102.33, stdev=10029.15
00:36:59.342 clat (usec): min=10185, max=41176, avg=23565.15, stdev=2140.75
00:36:59.342 lat (usec): min=10191, max=41195, avg=23578.25, stdev=2141.59
00:36:59.342 clat percentiles (usec):
00:36:59.342 | 1.00th=[14353], 5.00th=[20579], 10.00th=[22938], 20.00th=[23462],
00:36:59.342 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.342 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035],
00:36:59.342 | 99.00th=[31589], 99.50th=[33817], 99.90th=[35390], 99.95th=[35914],
00:36:59.342 | 99.99th=[41157]
00:36:59.342 bw ( KiB/s): min= 2512, max= 3040, per=4.16%, avg=2699.20, stdev=99.32, samples=20
00:36:59.342 iops : min= 628, max= 760, avg=674.80, stdev=24.83, samples=20
00:36:59.343 lat (msec) : 20=4.46%, 50=95.54%
00:36:59.343 cpu : usr=98.95%, sys=0.77%, ctx=10, majf=0, minf=9
00:36:59.343 IO depths : 1=4.6%, 2=10.3%, 4=23.2%, 8=53.8%, 16=8.1%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6764,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename0: (groupid=0, jobs=1): err= 0: pid=3044468: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=687, BW=2749KiB/s (2815kB/s)(26.9MiB/10005msec)
00:36:59.343 slat (nsec): min=5444, max=71470, avg=13974.98, stdev=9501.96
00:36:59.343 clat (usec): min=4666, max=43042, avg=23179.58, stdev=3554.00
00:36:59.343 lat (usec): min=4672, max=43057, avg=23193.55, stdev=3554.61
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[13698], 5.00th=[16188], 10.00th=[18482], 20.00th=[22676],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:36:59.343 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[28443],
00:36:59.343 | 99.00th=[34341], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254],
00:36:59.343 | 99.99th=[43254]
00:36:59.343 bw ( KiB/s): min= 2480, max= 2880, per=4.22%, avg=2739.79, stdev=86.43, samples=19
00:36:59.343 iops : min= 620, max= 720, avg=684.95, stdev=21.61, samples=19
00:36:59.343 lat (msec) : 10=0.23%, 20=13.47%, 50=86.30%
00:36:59.343 cpu : usr=98.87%, sys=0.77%, ctx=67, majf=0, minf=9
00:36:59.343 IO depths : 1=3.0%, 2=6.5%, 4=16.4%, 8=63.7%, 16=10.5%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=92.0%, 8=3.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6875,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename0: (groupid=0, jobs=1): err= 0: pid=3044469: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=683, BW=2734KiB/s (2799kB/s)(26.7MiB/10007msec)
00:36:59.343 slat (nsec): min=5564, max=76834, avg=11834.22, stdev=8866.78
00:36:59.343 clat (usec): min=9235, max=45359, avg=23333.42, stdev=2756.77
00:36:59.343 lat (usec): min=9241, max=45376, avg=23345.25, stdev=2757.44
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[14484], 5.00th=[16909], 10.00th=[21103], 20.00th=[23200],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.343 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035],
00:36:59.343 | 99.00th=[32900], 99.50th=[35914], 99.90th=[37487], 99.95th=[45351],
00:36:59.343 | 99.99th=[45351]
00:36:59.343 bw ( KiB/s): min= 2560, max= 3056, per=4.21%, avg=2733.89, stdev=125.58, samples=19
00:36:59.343 iops : min= 640, max= 764, avg=683.47, stdev=31.40, samples=19
00:36:59.343 lat (msec) : 10=0.09%, 20=8.85%, 50=91.07%
00:36:59.343 cpu : usr=98.12%, sys=1.21%, ctx=217, majf=0, minf=9
00:36:59.343 IO depths : 1=1.9%, 2=4.0%, 4=9.8%, 8=70.6%, 16=13.6%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=88.6%, 8=8.6%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6839,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename0: (groupid=0, jobs=1): err= 0: pid=3044470: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10019msec)
00:36:59.343 slat (nsec): min=5579, max=96151, avg=16058.41, stdev=12167.72
00:36:59.343 clat (usec): min=8118, max=25830, avg=23558.95, stdev=1498.43
00:36:59.343 lat (usec): min=8151, max=25837, avg=23575.01, stdev=1496.34
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[15270], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.343 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.343 | 99.00th=[25035], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822],
00:36:59.343 | 99.99th=[25822]
00:36:59.343 bw ( KiB/s): min= 2688, max= 2944, per=4.16%, avg=2700.80, stdev=57.24, samples=20
00:36:59.343 iops : min= 672, max= 736, avg=675.20, stdev=14.31, samples=20
00:36:59.343 lat (msec) : 10=0.59%, 20=0.86%, 50=98.55%
00:36:59.343 cpu : usr=99.06%, sys=0.65%, ctx=23, majf=0, minf=9
00:36:59.343 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename0: (groupid=0, jobs=1): err= 0: pid=3044471: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=668, BW=2675KiB/s (2739kB/s)(26.1MiB/10002msec)
00:36:59.343 slat (nsec): min=5645, max=84041, avg=20670.29, stdev=12817.36
00:36:59.343 clat (usec): min=15259, max=34084, avg=23735.31, stdev=1195.62
00:36:59.343 lat (usec): min=15266, max=34094, avg=23755.98, stdev=1195.19
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[22152], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.343 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.343 | 99.00th=[28967], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817],
00:36:59.343 | 99.99th=[34341]
00:36:59.343 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2667.79, stdev=47.95, samples=19
00:36:59.343 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19
00:36:59.343 lat (msec) : 20=0.91%, 50=99.09%
00:36:59.343 cpu : usr=98.76%, sys=0.76%, ctx=118, majf=0, minf=9
00:36:59.343 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename1: (groupid=0, jobs=1): err= 0: pid=3044472: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=679, BW=2720KiB/s (2785kB/s)(26.6MiB/10001msec)
00:36:59.343 slat (usec): min=5, max=102, avg= 9.82, stdev= 7.40
00:36:59.343 clat (usec): min=8369, max=32633, avg=23449.81, stdev=1826.38
00:36:59.343 lat (usec): min=8396, max=32640, avg=23459.63, stdev=1824.75
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[13435], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462],
00:36:59.343 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725],
00:36:59.343 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.343 | 99.00th=[25035], 99.50th=[25560], 99.90th=[25822], 99.95th=[30016],
00:36:59.343 | 99.99th=[32637]
00:36:59.343 bw ( KiB/s): min= 2688, max= 2944, per=4.18%, avg=2714.95, stdev=68.52, samples=19
00:36:59.343 iops : min= 672, max= 736, avg=678.74, stdev=17.13, samples=19
00:36:59.343 lat (msec) : 10=0.51%, 20=2.84%, 50=96.65%
00:36:59.343 cpu : usr=98.73%, sys=0.88%, ctx=100, majf=0, minf=9
00:36:59.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename1: (groupid=0, jobs=1): err= 0: pid=3044473: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=671, BW=2686KiB/s (2751kB/s)(26.2MiB/10006msec)
00:36:59.343 slat (nsec): min=5751, max=78370, avg=20875.58, stdev=11829.98
00:36:59.343 clat (usec): min=14376, max=26083, avg=23637.83, stdev=701.52
00:36:59.343 lat (usec): min=14383, max=26093, avg=23658.71, stdev=701.40
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.343 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.343 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25822], 99.95th=[26084],
00:36:59.343 | 99.99th=[26084]
00:36:59.343 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2681.26, stdev=29.37, samples=19
00:36:59.343 iops : min= 640, max= 672, avg=670.32, stdev= 7.34, samples=19
00:36:59.343 lat (msec) : 20=0.48%, 50=99.52%
00:36:59.343 cpu : usr=98.91%, sys=0.80%, ctx=24, majf=0, minf=9
00:36:59.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename1: (groupid=0, jobs=1): err= 0: pid=3044474: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=677, BW=2708KiB/s (2773kB/s)(26.5MiB/10017msec)
00:36:59.343 slat (nsec): min=5567, max=71151, avg=13271.80, stdev=9544.16
00:36:59.343 clat (usec): min=6801, max=38785, avg=23515.50, stdev=1872.89
00:36:59.343 lat (usec): min=6816, max=38806, avg=23528.78, stdev=1873.26
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[15139], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.343 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773],
00:36:59.343 | 99.00th=[25560], 99.50th=[30278], 99.90th=[38536], 99.95th=[38536],
00:36:59.343 | 99.99th=[38536]
00:36:59.343 bw ( KiB/s): min= 2672, max= 2960, per=4.17%, avg=2707.37, stdev=66.52, samples=19
00:36:59.343 iops : min= 668, max= 740, avg=676.84, stdev=16.63, samples=19
00:36:59.343 lat (msec) : 10=0.27%, 20=3.18%, 50=96.55%
00:36:59.343 cpu : usr=98.79%, sys=0.87%, ctx=118, majf=0, minf=9
00:36:59.343 IO depths : 1=5.7%, 2=11.6%, 4=24.0%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0%
00:36:59.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.343 issued rwts: total=6782,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.343 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.343 filename1: (groupid=0, jobs=1): err= 0: pid=3044475: Wed Nov 20 11:37:50 2024
00:36:59.343 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10007msec)
00:36:59.343 slat (nsec): min=4590, max=87510, avg=16666.82, stdev=13296.87
00:36:59.343 clat (usec): min=9389, max=40507, avg=23900.38, stdev=3762.52
00:36:59.343 lat (usec): min=9395, max=40520, avg=23917.05, stdev=3763.19
00:36:59.343 clat percentiles (usec):
00:36:59.343 | 1.00th=[14746], 5.00th=[17957], 10.00th=[19268], 20.00th=[22938],
00:36:59.343 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987],
00:36:59.343 | 70.00th=[23987], 80.00th=[24511], 90.00th=[28443], 95.00th=[31589],
00:36:59.344 | 99.00th=[36439], 99.50th=[38011], 99.90th=[39060], 99.95th=[40633],
00:36:59.344 | 99.99th=[40633]
00:36:59.344 bw ( KiB/s): min= 2432, max= 2848, per=4.09%, avg=2656.84, stdev=112.47, samples=19
00:36:59.344 iops : min= 608, max= 712, avg=664.21, stdev=28.12, samples=19
00:36:59.344 lat (msec) : 10=0.18%, 20=12.41%, 50=87.41%
00:36:59.344 cpu : usr=98.48%, sys=1.13%, ctx=53, majf=0, minf=9
00:36:59.344 IO depths : 1=1.3%, 2=2.6%, 4=8.1%, 8=74.0%, 16=14.0%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=90.2%, 8=6.9%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename1: (groupid=0, jobs=1): err= 0: pid=3044476: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=682, BW=2730KiB/s (2796kB/s)(26.7MiB/10004msec)
00:36:59.344 slat (nsec): min=5391, max=81302, avg=16534.64, stdev=13687.26
00:36:59.344 clat (usec): min=5714, max=62931, avg=23329.95, stdev=3508.36
00:36:59.344 lat (usec): min=5719, max=62947, avg=23346.49, stdev=3508.82
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[14484], 5.00th=[16319], 10.00th=[19268], 20.00th=[22938],
00:36:59.344 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25560], 95.00th=[28705],
00:36:59.344 | 99.00th=[33424], 99.50th=[35390], 99.90th=[50594], 99.95th=[50594],
00:36:59.344 | 99.99th=[63177]
00:36:59.344 bw ( KiB/s): min= 2528, max= 2896, per=4.20%, avg=2724.21, stdev=97.19, samples=19
00:36:59.344 iops : min= 632, max= 724, avg=681.05, stdev=24.30, samples=19
00:36:59.344 lat (msec) : 10=0.06%, 20=13.08%, 50=86.63%, 100=0.23%
00:36:59.344 cpu : usr=98.73%, sys=0.82%, ctx=53, majf=0, minf=9
00:36:59.344 IO depths : 1=1.7%, 2=3.9%, 4=10.5%, 8=70.7%, 16=13.2%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=90.9%, 8=5.8%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6828,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename1: (groupid=0, jobs=1): err= 0: pid=3044477: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10019msec)
00:36:59.344 slat (nsec): min=5646, max=79340, avg=19009.11, stdev=13312.34
00:36:59.344 clat (usec): min=7830, max=26044, avg=23529.25, stdev=1561.25
00:36:59.344 lat (usec): min=7859, max=26053, avg=23548.26, stdev=1560.35
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[12649], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:36:59.344 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.344 | 99.00th=[25297], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084],
00:36:59.344 | 99.99th=[26084]
00:36:59.344 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2700.80, stdev=70.72, samples=20
00:36:59.344 iops : min= 640, max= 736, avg=675.20, stdev=17.68, samples=20
00:36:59.344 lat (msec) : 10=0.55%, 20=0.87%, 50=98.58%
00:36:59.344 cpu : usr=98.95%, sys=0.74%, ctx=25, majf=0, minf=9
00:36:59.344 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename1: (groupid=0, jobs=1): err= 0: pid=3044478: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=670, BW=2684KiB/s (2748kB/s)(26.2MiB/10013msec)
00:36:59.344 slat (nsec): min=5573, max=69634, avg=17149.51, stdev=10971.15
00:36:59.344 clat (usec): min=12574, max=33715, avg=23691.99, stdev=1032.07
00:36:59.344 lat (usec): min=12580, max=33722, avg=23709.14, stdev=1031.90
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[22676], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:36:59.344 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.344 | 99.00th=[25297], 99.50th=[25822], 99.90th=[32900], 99.95th=[32900],
00:36:59.344 | 99.99th=[33817]
00:36:59.344 bw ( KiB/s): min= 2560, max= 2793, per=4.13%, avg=2680.45, stdev=47.66, samples=20
00:36:59.344 iops : min= 640, max= 698, avg=670.10, stdev=11.88, samples=20
00:36:59.344 lat (msec) : 20=0.65%, 50=99.35%
00:36:59.344 cpu : usr=98.84%, sys=0.71%, ctx=61, majf=0, minf=9
00:36:59.344 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6718,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename1: (groupid=0, jobs=1): err= 0: pid=3044479: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=689, BW=2756KiB/s (2822kB/s)(26.9MiB/10005msec)
00:36:59.344 slat (nsec): min=5049, max=89085, avg=15757.85, stdev=12677.79
00:36:59.344 clat (usec): min=4977, max=39715, avg=23109.71, stdev=3819.82
00:36:59.344 lat (usec): min=4983, max=39732, avg=23125.46, stdev=3821.11
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[13566], 5.00th=[15926], 10.00th=[17957], 20.00th=[20579],
00:36:59.344 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[24249], 90.00th=[26346], 95.00th=[29492],
00:36:59.344 | 99.00th=[35914], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584],
00:36:59.344 | 99.99th=[39584]
00:36:59.344 bw ( KiB/s): min= 2549, max= 3024, per=4.23%, avg=2746.37, stdev=127.14, samples=19
00:36:59.344 iops : min= 637, max= 756, avg=686.58, stdev=31.81, samples=19
00:36:59.344 lat (msec) : 10=0.09%, 20=16.87%, 50=83.04%
00:36:59.344 cpu : usr=98.70%, sys=0.91%, ctx=76, majf=0, minf=9
00:36:59.344 IO depths : 1=2.2%, 2=4.5%, 4=11.4%, 8=69.9%, 16=12.0%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=90.7%, 8=5.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6894,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename2: (groupid=0, jobs=1): err= 0: pid=3044480: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=674, BW=2696KiB/s (2761kB/s)(26.4MiB/10017msec)
00:36:59.344 slat (nsec): min=5601, max=81196, avg=18733.13, stdev=13853.52
00:36:59.344 clat (usec): min=8360, max=29810, avg=23579.38, stdev=1269.52
00:36:59.344 lat (usec): min=8372, max=29818, avg=23598.11, stdev=1268.97
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[18482], 5.00th=[22938], 10.00th=[23200], 20.00th=[23200],
00:36:59.344 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.344 | 99.00th=[25035], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084],
00:36:59.344 | 99.99th=[29754]
00:36:59.344 bw ( KiB/s): min= 2560, max= 2944, per=4.15%, avg=2694.74, stdev=67.11, samples=19
00:36:59.344 iops : min= 640, max= 736, avg=673.68, stdev=16.78, samples=19
00:36:59.344 lat (msec) : 10=0.24%, 20=0.98%, 50=98.79%
00:36:59.344 cpu : usr=99.09%, sys=0.62%, ctx=13, majf=0, minf=9
00:36:59.344 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename2: (groupid=0, jobs=1): err= 0: pid=3044481: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=673, BW=2694KiB/s (2759kB/s)(26.3MiB/10005msec)
00:36:59.344 slat (nsec): min=5558, max=79819, avg=16487.82, stdev=11462.36
00:36:59.344 clat (usec): min=4395, max=43309, avg=23615.08, stdev=2456.63
00:36:59.344 lat (usec): min=4409, max=43324, avg=23631.57, stdev=2457.23
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[14484], 5.00th=[21890], 10.00th=[22938], 20.00th=[23200],
00:36:59.344 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035],
00:36:59.344 | 99.00th=[31589], 99.50th=[34866], 99.90th=[43254], 99.95th=[43254],
00:36:59.344 | 99.99th=[43254]
00:36:59.344 bw ( KiB/s): min= 2560, max= 2784, per=4.13%, avg=2682.11, stdev=49.23, samples=19
00:36:59.344 iops : min= 640, max= 696, avg=670.53, stdev=12.31, samples=19
00:36:59.344 lat (msec) : 10=0.30%, 20=3.74%, 50=95.96%
00:36:59.344 cpu : usr=98.67%, sys=0.74%, ctx=181, majf=0, minf=9
00:36:59.344 IO depths : 1=4.0%, 2=8.6%, 4=19.5%, 8=58.6%, 16=9.3%, 32=0.0%, >=64=0.0%
00:36:59.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.344 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.344 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.344 filename2: (groupid=0, jobs=1): err= 0: pid=3044482: Wed Nov 20 11:37:50 2024
00:36:59.344 read: IOPS=676, BW=2707KiB/s (2772kB/s)(26.4MiB/10002msec)
00:36:59.344 slat (nsec): min=5615, max=82783, avg=11191.45, stdev=8710.07
00:36:59.344 clat (usec): min=7648, max=33208, avg=23553.00, stdev=1820.36
00:36:59.344 lat (usec): min=7689, max=33215, avg=23564.19, stdev=1819.02
00:36:59.344 clat percentiles (usec):
00:36:59.344 | 1.00th=[12256], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:36:59.344 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725],
00:36:59.344 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773],
00:36:59.344 | 99.00th=[25297], 99.50th=[25560], 99.90th=[32113], 99.95th=[32900],
00:36:59.344 | 99.99th=[33162]
00:36:59.344 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2701.47, stdev=93.34, samples=19
00:36:59.344 iops : min= 640, max= 736, avg=675.37, stdev=23.33, samples=19
00:36:59.344 lat (msec) : 10=0.69%, 20=1.55%, 50=97.75%
00:36:59.344 cpu : usr=99.04%, sys=0.68%, ctx=10, majf=0, minf=9
00:36:59.344 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0%
00:36:59.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 issued rwts: total=6768,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.345 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.345 filename2: (groupid=0, jobs=1): err= 0: pid=3044483: Wed Nov 20 11:37:50 2024
00:36:59.345 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.4MiB/10005msec)
00:36:59.345 slat (nsec): min=4794, max=87683, avg=22265.48, stdev=15275.54
00:36:59.345 clat (usec): min=10953, max=39748, avg=23467.04, stdev=2004.62
00:36:59.345 lat (usec): min=10959, max=39765, avg=23489.30, stdev=2005.75
00:36:59.345 clat percentiles (usec):
00:36:59.345 | 1.00th=[15401], 5.00th=[22414], 10.00th=[22938], 20.00th=[23200],
00:36:59.345 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:36:59.345 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.345 | 99.00th=[30540], 99.50th=[33424], 99.90th=[39584], 99.95th=[39584],
00:36:59.345 | 99.99th=[39584]
00:36:59.345 bw ( KiB/s): min= 2560, max= 3008, per=4.15%, avg=2690.79, stdev=86.45, samples=19
00:36:59.345 iops : min= 640, max= 752, avg=672.68, stdev=21.63, samples=19
00:36:59.345 lat (msec) : 20=3.31%, 50=96.69%
00:36:59.345 cpu : usr=98.74%, sys=0.82%, ctx=66, majf=0, minf=9
00:36:59.345 IO depths : 1=5.2%, 2=11.0%, 4=23.6%, 8=52.8%, 16=7.4%, 32=0.0%, >=64=0.0%
00:36:59.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.345 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.345 filename2: (groupid=0, jobs=1): err= 0: pid=3044484: Wed Nov 20 11:37:50 2024
00:36:59.345 read: IOPS=671, BW=2688KiB/s (2752kB/s)(26.2MiB/10001msec)
00:36:59.345 slat (nsec): min=5594, max=78641, avg=17022.98, stdev=13090.54
00:36:59.345 clat (usec): min=13658, max=33938, avg=23659.81, stdev=835.52
00:36:59.345 lat (usec): min=13666, max=33953, avg=23676.83, stdev=834.74
00:36:59.345 clat percentiles (usec):
00:36:59.345 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:36:59.345 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.345 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.345 | 99.00th=[25297], 99.50th=[25560], 99.90th=[31851], 99.95th=[32900],
00:36:59.345 | 99.99th=[33817]
00:36:59.345 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.26, stdev=51.80, samples=19
00:36:59.345 iops : min= 640, max= 704, avg=670.32, stdev=12.95, samples=19
00:36:59.345 lat (msec) : 20=0.62%, 50=99.38%
00:36:59.345 cpu : usr=98.65%, sys=0.81%, ctx=126, majf=0, minf=9
00:36:59.345 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:59.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.345 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.345 filename2: (groupid=0, jobs=1): err= 0: pid=3044485: Wed Nov 20 11:37:50 2024
00:36:59.345 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10005msec)
00:36:59.345 slat (nsec): min=5456, max=92493, avg=16199.50, stdev=12514.80
00:36:59.345 clat (usec): min=4408, max=55533, avg=23765.91, stdev=3221.74
00:36:59.345 lat (usec): min=4414, max=55552, avg=23782.11, stdev=3221.94
00:36:59.345 clat percentiles (usec):
00:36:59.345 | 1.00th=[13304], 5.00th=[18744], 10.00th=[22414], 20.00th=[23200],
00:36:59.345 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.345 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[29492],
00:36:59.345 | 99.00th=[34341], 99.50th=[36963], 99.90th=[42730], 99.95th=[42730],
00:36:59.345 | 99.99th=[55313]
00:36:59.345 bw ( KiB/s): min= 2480, max= 2880, per=4.13%, avg=2680.80, stdev=91.76, samples=20
00:36:59.345 iops : min= 620, max= 720, avg=670.20, stdev=22.94, samples=20
00:36:59.345 lat (msec) : 10=0.39%, 20=7.02%, 50=92.55%, 100=0.04%
00:36:59.345 cpu : usr=98.66%, sys=0.96%, ctx=79, majf=0, minf=9
00:36:59.345 IO depths : 1=0.1%, 2=1.4%, 4=7.4%, 8=75.0%, 16=16.2%, 32=0.0%, >=64=0.0%
00:36:59.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 complete : 0=0.0%, 4=90.6%, 8=7.2%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 issued rwts: total=6712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.345 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.345 filename2: (groupid=0, jobs=1): err= 0: pid=3044486: Wed Nov 20 11:37:50 2024
00:36:59.345 read: IOPS=690, BW=2763KiB/s (2829kB/s)(27.0MiB/10004msec)
00:36:59.345 slat (nsec): min=5562, max=81994, avg=18857.48, stdev=13879.32
00:36:59.345 clat (usec): min=4470, max=55114, avg=23009.77, stdev=3030.04
00:36:59.345 lat (usec): min=4476, max=55129, avg=23028.63, stdev=3031.74
00:36:59.345 clat percentiles (usec):
00:36:59.345 | 1.00th=[13042], 5.00th=[16057], 10.00th=[19792], 20.00th=[23200],
00:36:59.345 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725],
00:36:59.345 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[24773],
00:36:59.345 | 99.00th=[31327], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730],
00:36:59.345 | 99.99th=[55313]
00:36:59.345 bw ( KiB/s): min= 2656, max= 3088, per=4.24%, avg=2754.53, stdev=106.97, samples=19
00:36:59.345 iops : min= 664, max= 772, avg=688.63, stdev=26.74, samples=19
00:36:59.345 lat (msec) : 10=0.23%, 20=10.78%, 50=88.96%, 100=0.03%
00:36:59.345 cpu : usr=98.72%, sys=0.84%, ctx=73, majf=0, minf=9
00:36:59.345 IO depths : 1=3.2%, 2=7.3%, 4=17.0%, 8=61.9%, 16=10.7%, 32=0.0%, >=64=0.0%
00:36:59.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 complete : 0=0.0%, 4=92.3%, 8=3.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 issued rwts: total=6910,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.345 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.345 filename2: (groupid=0, jobs=1): err= 0: pid=3044487: Wed Nov 20 11:37:50 2024
00:36:59.345 read: IOPS=671, BW=2685KiB/s (2750kB/s)(26.2MiB/10010msec)
00:36:59.345 slat (nsec): min=5630, max=75254, avg=16182.54, stdev=10184.26
00:36:59.345 clat (usec): min=12423, max=35510, avg=23689.69, stdev=951.42
00:36:59.345 lat (usec): min=12436, max=35527, avg=23705.87, stdev=951.34
00:36:59.345 clat percentiles (usec):
00:36:59.345 | 1.00th=[22676], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462],
00:36:59.345 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725],
00:36:59.345 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511],
00:36:59.345 | 99.00th=[25297], 99.50th=[25560], 99.90th=[30802], 99.95th=[33162],
00:36:59.345 | 99.99th=[35390]
00:36:59.345 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2681.60, stdev=50.44, samples=20
00:36:59.345 iops : min= 640, max= 704, avg=670.40, stdev=12.61, samples=20
00:36:59.345 lat (msec) : 20=0.62%, 50=99.38%
00:36:59.345 cpu : usr=98.55%, sys=0.88%, ctx=198, majf=0, minf=9
00:36:59.345 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:36:59.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:59.345 issued rwts: total=6720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:59.345 latency : target=0, window=0, percentile=100.00%, depth=16
00:36:59.345
00:36:59.345 Run status group 0 (all jobs):
00:36:59.345 READ: bw=63.4MiB/s (66.5MB/s), 2667KiB/s-2763KiB/s (2731kB/s-2829kB/s), io=635MiB (666MB), run=10001-10022msec
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:36:59.345 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 bdev_null0
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 [2024-11-20 11:37:50.948822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 bdev_null1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:59.346 11:37:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
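As in the earlier run, the fio_plugin helper invoked above probes the plugin binary for a linked sanitizer runtime before launching fio, which is what the ldd/grep/awk xtrace that follows corresponds to. A condensed sketch of that sequence (paths as logged, variable names assumed):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # resolve the ASan runtime the plugin links, if any; empty on non-sanitizer
    # builds, hence the [[ -n '' ]] checks in the log
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer runtime (when present) ahead of the plugin, then run fio
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61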
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.346 { 00:36:59.346 "params": { 00:36:59.346 "name": "Nvme$subsystem", 00:36:59.346 "trtype": "$TEST_TRANSPORT", 00:36:59.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.346 "adrfam": "ipv4", 00:36:59.346 "trsvcid": "$NVMF_PORT", 00:36:59.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.346 "hdgst": ${hdgst:-false}, 00:36:59.346 "ddgst": ${ddgst:-false} 00:36:59.346 }, 00:36:59.346 "method": "bdev_nvme_attach_controller" 00:36:59.346 } 00:36:59.346 EOF 00:36:59.346 )") 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.346 { 00:36:59.346 "params": { 00:36:59.346 "name": "Nvme$subsystem", 00:36:59.346 "trtype": "$TEST_TRANSPORT", 00:36:59.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.346 "adrfam": "ipv4", 00:36:59.346 "trsvcid": "$NVMF_PORT", 00:36:59.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.346 "hdgst": ${hdgst:-false}, 00:36:59.346 "ddgst": ${ddgst:-false} 00:36:59.346 }, 00:36:59.346 "method": "bdev_nvme_attach_controller" 00:36:59.346 } 00:36:59.346 EOF 00:36:59.346 )") 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
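Each pass of the loop traced above appends one heredoc fragment to the config bash array; the IFS=, join and jq validation that follow just below complete the document. A stripped-down sketch of the same pattern, with an illustrative [ ] wrapper so jq sees a single valid JSON value (the real helper splices the comma-joined fragments into its own top-level config instead):

config=()
for subsystem in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# comma-join the fragments; the [ ] wrapper makes one valid JSON value for jq
(IFS=,; printf '[%s]\n' "${config[*]}" | jq .)
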
00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:59.346 "params": { 00:36:59.346 "name": "Nvme0", 00:36:59.346 "trtype": "tcp", 00:36:59.346 "traddr": "10.0.0.2", 00:36:59.346 "adrfam": "ipv4", 00:36:59.346 "trsvcid": "4420", 00:36:59.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.346 "hdgst": false, 00:36:59.346 "ddgst": false 00:36:59.346 }, 00:36:59.346 "method": "bdev_nvme_attach_controller" 00:36:59.346 },{ 00:36:59.346 "params": { 00:36:59.346 "name": "Nvme1", 00:36:59.346 "trtype": "tcp", 00:36:59.346 "traddr": "10.0.0.2", 00:36:59.346 "adrfam": "ipv4", 00:36:59.346 "trsvcid": "4420", 00:36:59.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:59.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:59.346 "hdgst": false, 00:36:59.346 "ddgst": false 00:36:59.346 }, 00:36:59.346 "method": "bdev_nvme_attach_controller" 00:36:59.346 }' 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:59.346 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:59.347 11:37:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.347 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:59.347 ... 00:36:59.347 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:59.347 ... 
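The job banner above pins down every fio parameter for this pass, and numjobs=2 across two job sections is why fio reports "Starting 4 threads" next. A job file consistent with the banner might look like the sketch below; the section names and the Nvme0n1/Nvme1n1 filenames are inferred from SPDK's controller-plus-namespace bdev naming, not taken from gen_fio_conf's actual output:

cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
; SPDK fio plugins require thread mode
thread=1
ioengine=spdk_bdev
rw=randread
; read/write/trim block sizes, matching the (R)/(W)/(T) triple in the banner
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
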
00:36:59.347 fio-3.35 00:36:59.347 Starting 4 threads 00:37:04.639 00:37:04.639 filename0: (groupid=0, jobs=1): err= 0: pid=3046871: Wed Nov 20 11:37:56 2024 00:37:04.639 read: IOPS=2940, BW=23.0MiB/s (24.1MB/s)(115MiB/5002msec) 00:37:04.639 slat (nsec): min=5396, max=35548, avg=6028.22, stdev=1371.89 00:37:04.639 clat (usec): min=1191, max=43811, avg=2704.81, stdev=990.48 00:37:04.639 lat (usec): min=1196, max=43847, avg=2710.83, stdev=990.68 00:37:04.639 clat percentiles (usec): 00:37:04.639 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2573], 00:37:04.639 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:04.639 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2966], 00:37:04.639 | 99.00th=[ 3884], 99.50th=[ 4146], 99.90th=[ 4359], 99.95th=[43779], 00:37:04.639 | 99.99th=[43779] 00:37:04.639 bw ( KiB/s): min=21888, max=23888, per=24.73%, avg=23553.78, stdev=633.49, samples=9 00:37:04.639 iops : min= 2736, max= 2986, avg=2944.22, stdev=79.19, samples=9 00:37:04.639 lat (msec) : 2=0.63%, 4=98.69%, 10=0.62%, 50=0.05% 00:37:04.639 cpu : usr=96.12%, sys=3.66%, ctx=8, majf=0, minf=88 00:37:04.639 IO depths : 1=0.1%, 2=0.2%, 4=71.0%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 issued rwts: total=14708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.640 filename0: (groupid=0, jobs=1): err= 0: pid=3046872: Wed Nov 20 11:37:56 2024 00:37:04.640 read: IOPS=2969, BW=23.2MiB/s (24.3MB/s)(116MiB/5001msec) 00:37:04.640 slat (nsec): min=5448, max=40966, avg=8659.50, stdev=2057.34 00:37:04.640 clat (usec): min=889, max=4787, avg=2671.16, stdev=245.12 00:37:04.640 lat (usec): min=898, max=4795, avg=2679.82, stdev=245.03 00:37:04.640 clat percentiles (usec): 00:37:04.640 | 1.00th=[ 1991], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:37:04.640 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:04.640 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2999], 00:37:04.640 | 99.00th=[ 3589], 99.50th=[ 3884], 99.90th=[ 4293], 99.95th=[ 4490], 00:37:04.640 | 99.99th=[ 4752] 00:37:04.640 bw ( KiB/s): min=23632, max=24080, per=24.96%, avg=23777.78, stdev=162.86, samples=9 00:37:04.640 iops : min= 2954, max= 3010, avg=2972.22, stdev=20.36, samples=9 00:37:04.640 lat (usec) : 1000=0.02% 00:37:04.640 lat (msec) : 2=1.01%, 4=98.60%, 10=0.37% 00:37:04.640 cpu : usr=96.44%, sys=3.32%, ctx=6, majf=0, minf=65 00:37:04.640 IO depths : 1=0.1%, 2=0.1%, 4=71.4%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 issued rwts: total=14852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.640 filename1: (groupid=0, jobs=1): err= 0: pid=3046873: Wed Nov 20 11:37:56 2024 00:37:04.640 read: IOPS=3046, BW=23.8MiB/s (25.0MB/s)(119MiB/5002msec) 00:37:04.640 slat (nsec): min=5393, max=45262, avg=6292.54, stdev=2332.92 00:37:04.640 clat (usec): min=1170, max=4638, avg=2609.59, stdev=356.20 00:37:04.640 lat (usec): min=1176, max=4653, avg=2615.88, stdev=356.22 00:37:04.640 clat percentiles (usec): 00:37:04.640 | 1.00th=[ 1795], 5.00th=[ 2008], 10.00th=[ 2180], 
20.00th=[ 2376], 00:37:04.640 | 30.00th=[ 2474], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:04.640 | 70.00th=[ 2671], 80.00th=[ 2671], 90.00th=[ 2999], 95.00th=[ 3326], 00:37:04.640 | 99.00th=[ 3752], 99.50th=[ 3982], 99.90th=[ 4113], 99.95th=[ 4228], 00:37:04.640 | 99.99th=[ 4621] 00:37:04.640 bw ( KiB/s): min=23600, max=24816, per=25.53%, avg=24316.44, stdev=385.81, samples=9 00:37:04.640 iops : min= 2950, max= 3102, avg=3039.56, stdev=48.23, samples=9 00:37:04.640 lat (msec) : 2=4.18%, 4=95.66%, 10=0.16% 00:37:04.640 cpu : usr=95.28%, sys=3.36%, ctx=202, majf=0, minf=94 00:37:04.640 IO depths : 1=0.1%, 2=0.3%, 4=70.8%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 issued rwts: total=15238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.640 filename1: (groupid=0, jobs=1): err= 0: pid=3046874: Wed Nov 20 11:37:56 2024 00:37:04.640 read: IOPS=2952, BW=23.1MiB/s (24.2MB/s)(115MiB/5001msec) 00:37:04.640 slat (nsec): min=7860, max=36310, avg=8539.23, stdev=1826.86 00:37:04.640 clat (usec): min=945, max=4756, avg=2687.12, stdev=222.30 00:37:04.640 lat (usec): min=953, max=4764, avg=2695.66, stdev=222.26 00:37:04.640 clat percentiles (usec): 00:37:04.640 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:37:04.640 | 30.00th=[ 2638], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2671], 00:37:04.640 | 70.00th=[ 2671], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2966], 00:37:04.640 | 99.00th=[ 3556], 99.50th=[ 3916], 99.90th=[ 4359], 99.95th=[ 4621], 00:37:04.640 | 99.99th=[ 4752] 00:37:04.640 bw ( KiB/s): min=23376, max=23984, per=24.82%, avg=23646.11, stdev=184.75, samples=9 00:37:04.640 iops : min= 2922, max= 2998, avg=2955.67, stdev=23.08, samples=9 00:37:04.640 lat (usec) : 1000=0.03% 00:37:04.640 lat (msec) : 2=0.44%, 4=99.13%, 10=0.39% 00:37:04.640 cpu : usr=96.76%, sys=3.00%, ctx=7, majf=0, minf=67 00:37:04.640 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.640 issued rwts: total=14763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.640 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:04.640 00:37:04.640 Run status group 0 (all jobs): 00:37:04.640 READ: bw=93.0MiB/s (97.5MB/s), 23.0MiB/s-23.8MiB/s (24.1MB/s-25.0MB/s), io=465MiB (488MB), run=5001-5002msec 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 00:37:04.640 real 0m24.623s 00:37:04.640 user 5m17.172s 00:37:04.640 sys 0m4.632s 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 ************************************ 00:37:04.640 END TEST fio_dif_rand_params 00:37:04.640 ************************************ 00:37:04.640 11:37:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:04.640 11:37:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:04.640 11:37:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 ************************************ 00:37:04.640 START TEST fio_dif_digest 00:37:04.640 ************************************ 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 bdev_null0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:04.640 [2024-11-20 11:37:57.320353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.640 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:04.641 { 00:37:04.641 "params": { 00:37:04.641 "name": "Nvme$subsystem", 00:37:04.641 "trtype": "$TEST_TRANSPORT", 00:37:04.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:04.641 "adrfam": "ipv4", 00:37:04.641 "trsvcid": "$NVMF_PORT", 00:37:04.641 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:04.641 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:04.641 "hdgst": ${hdgst:-false}, 00:37:04.641 "ddgst": ${ddgst:-false} 00:37:04.641 }, 00:37:04.641 "method": "bdev_nvme_attach_controller" 00:37:04.641 } 00:37:04.641 EOF 00:37:04.641 )") 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:04.641 "params": { 00:37:04.641 "name": "Nvme0", 00:37:04.641 "trtype": "tcp", 00:37:04.641 "traddr": "10.0.0.2", 00:37:04.641 "adrfam": "ipv4", 00:37:04.641 "trsvcid": "4420", 00:37:04.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.641 "hdgst": true, 00:37:04.641 "ddgst": true 00:37:04.641 }, 00:37:04.641 "method": "bdev_nvme_attach_controller" 00:37:04.641 }' 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:04.641 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:04.925 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:04.925 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:04.925 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:04.925 11:37:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:05.191 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:05.191 ... 
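The invocation traced above shows how the plugin is wired in: the spdk_bdev engine is LD_PRELOADed into stock fio, and both the JSON config and the job file arrive over anonymous descriptors (/dev/fd/62 and /dev/fd/61). Note that this digest pass sets hdgst and ddgst to true in the attach params, enabling NVMe/TCP header and data digests end to end. The same invocation with ordinary files, where the /tmp paths are placeholders:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

# regular files behave the same as the harness's anonymous descriptors
LD_PRELOAD="$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/nvmf_target.json \
    /tmp/dif_digest.fio
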
00:37:05.191 fio-3.35 00:37:05.191 Starting 3 threads 00:37:17.442 00:37:17.442 filename0: (groupid=0, jobs=1): err= 0: pid=3048183: Wed Nov 20 11:38:08 2024 00:37:17.442 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(389MiB/10045msec) 00:37:17.442 slat (nsec): min=5789, max=32166, avg=8297.32, stdev=1785.13 00:37:17.442 clat (usec): min=7003, max=52790, avg=9665.96, stdev=2572.57 00:37:17.442 lat (usec): min=7012, max=52799, avg=9674.26, stdev=2572.56 00:37:17.442 clat percentiles (usec): 00:37:17.442 | 1.00th=[ 7898], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:37:17.442 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:37:17.442 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:37:17.442 | 99.00th=[11731], 99.50th=[12256], 99.90th=[51643], 99.95th=[51643], 00:37:17.442 | 99.99th=[52691] 00:37:17.442 bw ( KiB/s): min=33536, max=41216, per=33.15%, avg=39782.40, stdev=1725.12, samples=20 00:37:17.442 iops : min= 262, max= 322, avg=310.80, stdev=13.48, samples=20 00:37:17.442 lat (msec) : 10=75.47%, 20=24.18%, 50=0.06%, 100=0.29% 00:37:17.442 cpu : usr=93.70%, sys=5.77%, ctx=452, majf=0, minf=133 00:37:17.442 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.442 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.442 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.442 filename0: (groupid=0, jobs=1): err= 0: pid=3048184: Wed Nov 20 11:38:08 2024 00:37:17.442 read: IOPS=348, BW=43.5MiB/s (45.7MB/s)(437MiB/10045msec) 00:37:17.442 slat (nsec): min=5789, max=56878, avg=8025.51, stdev=1896.25 00:37:17.442 clat (usec): min=4983, max=48116, avg=8590.34, stdev=1167.47 00:37:17.442 lat (usec): min=4990, max=48123, avg=8598.36, stdev=1167.46 00:37:17.442 clat percentiles (usec): 00:37:17.442 | 1.00th=[ 6259], 5.00th=[ 7504], 10.00th=[ 7767], 20.00th=[ 8029], 00:37:17.442 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8717], 00:37:17.442 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:37:17.442 | 99.00th=[10028], 99.50th=[10159], 99.90th=[10945], 99.95th=[47973], 00:37:17.442 | 99.99th=[47973] 00:37:17.442 bw ( KiB/s): min=43264, max=46592, per=37.30%, avg=44761.60, stdev=944.36, samples=20 00:37:17.442 iops : min= 338, max= 364, avg=349.70, stdev= 7.38, samples=20 00:37:17.442 lat (msec) : 10=98.94%, 20=1.00%, 50=0.06% 00:37:17.442 cpu : usr=91.86%, sys=5.77%, ctx=866, majf=0, minf=145 00:37:17.442 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.442 issued rwts: total=3499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.442 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.442 filename0: (groupid=0, jobs=1): err= 0: pid=3048185: Wed Nov 20 11:38:08 2024 00:37:17.442 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10044msec) 00:37:17.442 slat (nsec): min=5793, max=31290, avg=7720.82, stdev=1573.54 00:37:17.442 clat (usec): min=6422, max=47507, avg=10706.05, stdev=1331.50 00:37:17.442 lat (usec): min=6431, max=47513, avg=10713.77, stdev=1331.45 00:37:17.442 clat percentiles (usec): 00:37:17.442 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 
00:37:17.442 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:37:17.442 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:37:17.442 | 99.00th=[13042], 99.50th=[13173], 99.90th=[14222], 99.95th=[45876], 00:37:17.442 | 99.99th=[47449] 00:37:17.442 bw ( KiB/s): min=34048, max=38912, per=29.93%, avg=35916.80, stdev=914.01, samples=20 00:37:17.442 iops : min= 266, max= 304, avg=280.60, stdev= 7.14, samples=20 00:37:17.442 lat (msec) : 10=20.41%, 20=79.52%, 50=0.07% 00:37:17.442 cpu : usr=94.32%, sys=5.42%, ctx=23, majf=0, minf=121 00:37:17.442 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:17.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:17.442 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:17.442 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:17.442 00:37:17.442 Run status group 0 (all jobs): 00:37:17.442 READ: bw=117MiB/s (123MB/s), 34.9MiB/s-43.5MiB/s (36.6MB/s-45.7MB/s), io=1177MiB (1234MB), run=10044-10045msec 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.442 00:37:17.442 real 0m11.206s 00:37:17.442 user 0m44.969s 00:37:17.442 sys 0m2.025s 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.442 11:38:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:17.442 ************************************ 00:37:17.442 END TEST fio_dif_digest 00:37:17.442 ************************************ 00:37:17.442 11:38:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:17.442 11:38:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:17.442 rmmod nvme_tcp 00:37:17.442 rmmod nvme_fabrics 00:37:17.442 rmmod nvme_keyring 00:37:17.442 11:38:08 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3037988 ']' 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3037988 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3037988 ']' 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3037988 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3037988 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3037988' 00:37:17.442 killing process with pid 3037988 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3037988 00:37:17.442 11:38:08 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3037988 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:17.442 11:38:08 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:19.367 Waiting for block devices as requested 00:37:19.629 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:19.629 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:19.629 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:19.890 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:19.890 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:19.890 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.151 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.151 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:20.151 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:20.412 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:20.412 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:20.673 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:20.673 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:20.673 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:20.673 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.934 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.934 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.196 11:38:13 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.196 11:38:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:21.196 11:38:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.740 11:38:15 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:23.740 
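The teardown above is the standard nvmftestfini path: unload the kernel initiator modules, restore only the non-SPDK firewall rules, and remove the target-side network namespace. A condensed manual equivalent, assuming the namespace and interface names from this run (remove_spdk_ns is presumed to delete cvl_0_0_ns_spdk):

# unload the kernel NVMe/TCP initiator stack; nvme_tcp, nvme_fabrics and
# nvme_keyring come out as the dependencies resolve
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# keep every firewall rule except the SPDK_NVMF-tagged ones the test added
iptables-save | grep -v SPDK_NVMF | iptables-restore

# drop the target namespace and flush the initiator-side interface
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
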
00:37:23.740 real 1m18.570s 00:37:23.740 user 7m58.259s 00:37:23.740 sys 0m22.550s 00:37:23.740 11:38:15 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.740 11:38:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.740 ************************************ 00:37:23.740 END TEST nvmf_dif 00:37:23.740 ************************************ 00:37:23.740 11:38:15 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:23.740 11:38:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.740 11:38:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.740 11:38:15 -- common/autotest_common.sh@10 -- # set +x 00:37:23.740 ************************************ 00:37:23.740 START TEST nvmf_abort_qd_sizes 00:37:23.740 ************************************ 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:23.740 * Looking for test storage... 00:37:23.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.740 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.741 --rc genhtml_branch_coverage=1 00:37:23.741 --rc genhtml_function_coverage=1 00:37:23.741 --rc genhtml_legend=1 00:37:23.741 --rc geninfo_all_blocks=1 00:37:23.741 --rc geninfo_unexecuted_blocks=1 00:37:23.741 00:37:23.741 ' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.741 --rc genhtml_branch_coverage=1 00:37:23.741 --rc genhtml_function_coverage=1 00:37:23.741 --rc genhtml_legend=1 00:37:23.741 --rc geninfo_all_blocks=1 00:37:23.741 --rc geninfo_unexecuted_blocks=1 00:37:23.741 00:37:23.741 ' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.741 --rc genhtml_branch_coverage=1 00:37:23.741 --rc genhtml_function_coverage=1 00:37:23.741 --rc genhtml_legend=1 00:37:23.741 --rc geninfo_all_blocks=1 00:37:23.741 --rc geninfo_unexecuted_blocks=1 00:37:23.741 00:37:23.741 ' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.741 --rc genhtml_branch_coverage=1 00:37:23.741 --rc genhtml_function_coverage=1 00:37:23.741 --rc genhtml_legend=1 00:37:23.741 --rc geninfo_all_blocks=1 00:37:23.741 --rc geninfo_unexecuted_blocks=1 00:37:23.741 00:37:23.741 ' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:23.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:23.741 11:38:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:31.880 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:31.880 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:31.880 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:31.880 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:31.880 11:38:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:31.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:31.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:37:31.880 00:37:31.880 --- 10.0.0.2 ping statistics --- 00:37:31.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.880 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:31.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:31.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:37:31.880 00:37:31.880 --- 10.0.0.1 ping statistics --- 00:37:31.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:31.880 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:31.880 11:38:23 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:34.425 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:34.425 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:34.685 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:34.946 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:34.946 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:34.946 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:34.946 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:34.946 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:34.946 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3057626 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3057626 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3057626 ']' 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
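The nvmf_tcp_init sequence traced above reduces to a small amount of ip/iptables plumbing: one port of the dual-port e810 NIC (cvl_0_0) moves into a private network namespace to host the target, while its sibling (cvl_0_1) stays in the default namespace as the initiator, so a single machine drives real NIC-to-NIC TCP. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# The comment tag is what lets cleanup later drop exactly this rule
# (iptables-save | grep -v SPDK_NVMF | iptables-restore, traced at 11:39:06).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why nvmfappstart wraps nvmf_tgt in ip netns exec cvl_0_0_ns_spdk: the target listens on 10.0.0.2 inside the namespace while the test tools dial in from the default namespace.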
00:37:35.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.207 11:38:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:35.207 [2024-11-20 11:38:27.756200] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:37:35.207 [2024-11-20 11:38:27.756247] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.207 [2024-11-20 11:38:27.849232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:35.207 [2024-11-20 11:38:27.881060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.207 [2024-11-20 11:38:27.881087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:35.207 [2024-11-20 11:38:27.881092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.207 [2024-11-20 11:38:27.881097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.207 [2024-11-20 11:38:27.881102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.207 [2024-11-20 11:38:27.882277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.207 [2024-11-20 11:38:27.882540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:35.207 [2024-11-20 11:38:27.882662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.207 [2024-11-20 11:38:27.882663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:36.146 
11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.146 11:38:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:36.146 ************************************ 00:37:36.146 START TEST spdk_target_abort 00:37:36.146 ************************************ 00:37:36.146 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:36.146 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:36.146 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:36.146 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.146 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.407 spdk_targetn1 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.407 [2024-11-20 11:38:28.960578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.407 11:38:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:36.407 [2024-11-20 11:38:29.020907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:36.407 11:38:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:36.676 [2024-11-20 11:38:29.278734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:272 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.278771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0024 p:1 m:0 dnr:0 00:37:36.676 [2024-11-20 11:38:29.310635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1320 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.310658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a8 p:1 m:0 dnr:0 00:37:36.676 [2024-11-20 11:38:29.311577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1384 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.311594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00ae p:1 m:0 dnr:0 00:37:36.676 [2024-11-20 11:38:29.334665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2080 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.334686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:37:36.676 [2024-11-20 11:38:29.351190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2640 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.351211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:37:36.676 [2024-11-20 11:38:29.358693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2864 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.358717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:37:36.676 [2024-11-20 11:38:29.383832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3744 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:36.676 [2024-11-20 11:38:29.383853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d6 p:0 m:0 dnr:0 00:37:39.969 Initializing NVMe Controllers 00:37:39.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:39.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:39.969 Initialization complete. Launching workers. 
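The ABORTED - BY REQUEST completions above are the point of the test, not a failure: the abort example submits reads and writes and then aborts them, so each NOTICE pair is an I/O the target cancelled on request. The rabort helper that drives this is just a loop over queue depths; restated from the trace (paths as in this workspace):

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

Here -q is the queue depth under test (the subject of abort_qd_sizes), -w rw -M 50 is a 50/50 read/write mix, and -o 4096 uses 4 KiB I/O.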
00:37:39.969 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11352, failed: 7 00:37:39.969 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1279, failed to submit 10080 00:37:39.969 success 774, unsuccessful 505, failed 0 00:37:39.969 11:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:39.969 11:38:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:39.969 [2024-11-20 11:38:32.611976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:528 len:8 PRP1 0x200004e56000 PRP2 0x0 00:37:39.969 [2024-11-20 11:38:32.612012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:37:39.969 [2024-11-20 11:38:32.674960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:1976 len:8 PRP1 0x200004e44000 PRP2 0x0 00:37:39.969 [2024-11-20 11:38:32.674981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00ff p:1 m:0 dnr:0 00:37:40.229 [2024-11-20 11:38:32.735892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:3552 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:37:40.229 [2024-11-20 11:38:32.735913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00bd p:0 m:0 dnr:0 00:37:40.229 [2024-11-20 11:38:32.751979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:3840 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:37:40.229 [2024-11-20 11:38:32.751998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:37:41.612 [2024-11-20 11:38:34.102060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:34560 len:8 PRP1 0x200004e62000 PRP2 0x0 00:37:41.612 [2024-11-20 11:38:34.102088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:00e6 p:1 m:0 dnr:0 00:37:43.524 Initializing NVMe Controllers 00:37:43.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:43.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:43.524 Initialization complete. Launching workers. 
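The summary counters reconcile: aborts that were submitted split into success and unsuccessful, and submitted plus failed-to-submit covers every I/O the workload finished. For the qd=4 run above:

echo $((774 + 505))       # 1279  == "abort submitted 1279"
echo $((1279 + 10080))    # 11359 == 11352 I/O completed + 7 failed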
00:37:43.524 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8558, failed: 5 00:37:43.524 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1229, failed to submit 7334 00:37:43.524 success 358, unsuccessful 871, failed 0 00:37:43.524 11:38:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.524 11:38:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:46.065 [2024-11-20 11:38:38.733913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:161 nsid:1 lba:317136 len:8 PRP1 0x200004b08000 PRP2 0x0 00:37:46.065 [2024-11-20 11:38:38.733946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:161 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:37:46.325 Initializing NVMe Controllers 00:37:46.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:46.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:46.325 Initialization complete. Launching workers. 00:37:46.325 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43706, failed: 1 00:37:46.325 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2680, failed to submit 41027 00:37:46.325 success 590, unsuccessful 2090, failed 0 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.325 11:38:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3057626 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3057626 ']' 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3057626 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057626 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:48.246 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:48.247 11:38:40 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057626' 00:37:48.247 killing process with pid 3057626 00:37:48.247 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3057626 00:37:48.247 11:38:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3057626 00:37:48.508 00:37:48.508 real 0m12.406s 00:37:48.508 user 0m50.519s 00:37:48.508 sys 0m2.023s 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:48.508 ************************************ 00:37:48.508 END TEST spdk_target_abort 00:37:48.508 ************************************ 00:37:48.508 11:38:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:48.508 11:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:48.508 11:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.508 11:38:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.508 ************************************ 00:37:48.508 START TEST kernel_target_abort 00:37:48.508 ************************************ 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:48.508 11:38:41 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:48.508 11:38:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:51.808 Waiting for block devices as requested 00:37:51.808 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:52.069 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:52.069 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:52.069 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:52.329 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:52.329 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:52.330 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:52.330 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:52.589 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:52.589 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:52.849 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:52.849 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:52.849 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:53.111 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:53.111 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:53.111 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:53.372 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:53.632 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:53.633 No valid GPT data, bailing 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:53.633 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:53.894 00:37:53.894 Discovery Log Number of Records 2, Generation counter 2 00:37:53.894 =====Discovery Log Entry 0====== 00:37:53.894 trtype: tcp 00:37:53.894 adrfam: ipv4 00:37:53.894 subtype: current discovery subsystem 00:37:53.894 treq: not specified, sq flow control disable supported 00:37:53.894 portid: 1 00:37:53.894 trsvcid: 4420 00:37:53.894 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:53.894 traddr: 10.0.0.1 00:37:53.894 eflags: none 00:37:53.894 sectype: none 00:37:53.894 =====Discovery Log Entry 1====== 00:37:53.894 trtype: tcp 00:37:53.894 adrfam: ipv4 00:37:53.894 subtype: nvme subsystem 00:37:53.894 treq: not specified, sq flow control disable supported 00:37:53.894 portid: 1 00:37:53.894 trsvcid: 4420 00:37:53.894 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:53.894 traddr: 10.0.0.1 00:37:53.894 eflags: none 00:37:53.894 sectype: none 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:53.894 
11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:53.894 11:38:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:57.214 Initializing NVMe Controllers 00:37:57.214 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:57.214 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:57.214 Initialization complete. Launching workers. 00:37:57.214 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67816, failed: 0 00:37:57.214 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67816, failed to submit 0 00:37:57.214 success 0, unsuccessful 67816, failed 0 00:37:57.214 11:38:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:57.214 11:38:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:00.515 Initializing NVMe Controllers 00:38:00.515 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:00.516 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:00.516 Initialization complete. Launching workers. 
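The kernel_target_abort runs in this stretch reach the same abort workload with no SPDK target process at all: configure_kernel_target wires /dev/nvme0n1 into the in-kernel nvmet/TCP target through configfs. The trace shows the mkdir/echo/ln sequence but not every destination file, so the attribute paths below are inferred from the standard nvmet configfs layout rather than read off the trace:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                       # nvmet-tcp assumed loaded or autoloaded
mkdir "$subsys" "$subsys/namespaces/1" "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # destination inferred
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

clean_kernel_target undoes this in reverse further down (rm the port symlink, rmdir the namespace, port and subsystem, then modprobe -r nvmet_tcp nvmet), which is the sequence traced at 11:38:55.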
00:38:00.516 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 121814, failed: 0 00:38:00.516 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30674, failed to submit 91140 00:38:00.516 success 0, unsuccessful 30674, failed 0 00:38:00.516 11:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:00.516 11:38:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:03.175 Initializing NVMe Controllers 00:38:03.175 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:03.175 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:03.175 Initialization complete. Launching workers. 00:38:03.175 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145530, failed: 0 00:38:03.175 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36422, failed to submit 109108 00:38:03.175 success 0, unsuccessful 36422, failed 0 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:03.175 11:38:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:07.381 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:07.381 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:07.382 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:07.382 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:07.382 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:08.769 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:09.030 00:38:09.030 real 0m20.386s 00:38:09.030 user 0m9.866s 00:38:09.030 sys 0m6.154s 00:38:09.030 11:39:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:09.030 11:39:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:09.030 ************************************ 00:38:09.030 END TEST kernel_target_abort 00:38:09.030 ************************************ 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:09.030 rmmod nvme_tcp 00:38:09.030 rmmod nvme_fabrics 00:38:09.030 rmmod nvme_keyring 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3057626 ']' 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3057626 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3057626 ']' 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3057626 00:38:09.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3057626) - No such process 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3057626 is not found' 00:38:09.030 Process with pid 3057626 is not found 00:38:09.030 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:09.031 11:39:01 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:12.363 Waiting for block devices as requested 00:38:12.363 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:12.624 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:12.624 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:12.624 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:12.884 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:12.884 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:12.884 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:13.145 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:13.145 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:13.405 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:13.405 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:13.405 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:13.666 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:13.666 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:13.666 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:13.927 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:13.927 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:14.188 11:39:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:16.734 11:39:08 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:16.734 00:38:16.734 real 0m52.898s 00:38:16.734 user 1m5.887s 00:38:16.734 sys 0m19.410s 00:38:16.734 11:39:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:16.734 11:39:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:16.734 ************************************ 00:38:16.734 END TEST nvmf_abort_qd_sizes 00:38:16.734 ************************************ 00:38:16.734 11:39:08 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:16.734 11:39:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:16.734 11:39:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:16.734 11:39:08 -- common/autotest_common.sh@10 -- # set +x 00:38:16.734 ************************************ 00:38:16.734 START TEST keyring_file 00:38:16.734 ************************************ 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:16.734 * Looking for test storage... 
00:38:16.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:16.734 11:39:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:16.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.734 --rc genhtml_branch_coverage=1 00:38:16.734 --rc genhtml_function_coverage=1 00:38:16.734 --rc genhtml_legend=1 00:38:16.734 --rc geninfo_all_blocks=1 00:38:16.734 --rc geninfo_unexecuted_blocks=1 00:38:16.734 00:38:16.734 ' 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:16.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.734 --rc genhtml_branch_coverage=1 00:38:16.734 --rc genhtml_function_coverage=1 00:38:16.734 --rc genhtml_legend=1 00:38:16.734 --rc geninfo_all_blocks=1 
00:38:16.734 --rc geninfo_unexecuted_blocks=1 00:38:16.734 00:38:16.734 ' 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:16.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.734 --rc genhtml_branch_coverage=1 00:38:16.734 --rc genhtml_function_coverage=1 00:38:16.734 --rc genhtml_legend=1 00:38:16.734 --rc geninfo_all_blocks=1 00:38:16.734 --rc geninfo_unexecuted_blocks=1 00:38:16.734 00:38:16.734 ' 00:38:16.734 11:39:09 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:16.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:16.734 --rc genhtml_branch_coverage=1 00:38:16.734 --rc genhtml_function_coverage=1 00:38:16.734 --rc genhtml_legend=1 00:38:16.734 --rc geninfo_all_blocks=1 00:38:16.734 --rc geninfo_unexecuted_blocks=1 00:38:16.734 00:38:16.734 ' 00:38:16.734 11:39:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:16.734 11:39:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:16.734 11:39:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:16.734 11:39:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:16.734 11:39:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:16.734 11:39:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:16.734 11:39:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:16.735 11:39:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:16.735 11:39:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:16.735 11:39:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:16.735 11:39:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:16.735 11:39:09 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.735 11:39:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.735 11:39:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.735 11:39:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:16.735 11:39:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:16.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
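prep_key, whose trace starts here and continues below, turns a hex secret and a digest selector into a file-backed TLS key. A minimal standalone sketch of what those steps appear to amount to, under stated assumptions: the secret is used as an ASCII string, the trailer appended before base64 is its CRC32 (byte order assumed little-endian), and digest 0 selects the "00" (no hash) field of the NVMe TLS PSK interchange format:

key=00112233445566778899aabbccddeeff
psk_body=$(python3 - "$key" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()                     # secret taken as ASCII (assumption)
crc = zlib.crc32(secret).to_bytes(4, "little")    # CRC32 trailer, endianness assumed
print(base64.b64encode(secret + crc).decode())
PY
)
path=$(mktemp)                          # e.g. /tmp/tmp.vcXQOHsbWr in this run
echo "NVMeTLSkey-1:00:${psk_body}:" > "$path"
chmod 0600 "$path"                      # the keyring gets only the path; mode per the trace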
00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vcXQOHsbWr 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vcXQOHsbWr 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vcXQOHsbWr 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vcXQOHsbWr 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LTOaUmNNvH 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:16.735 11:39:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LTOaUmNNvH 00:38:16.735 11:39:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LTOaUmNNvH 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LTOaUmNNvH 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=3068422 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3068422 00:38:16.735 11:39:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:16.735 11:39:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3068422 ']' 00:38:16.735 11:39:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:16.735 11:39:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.735 11:39:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:16.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:16.735 11:39:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.735 11:39:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:16.735 [2024-11-20 11:39:09.460205] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:38:16.735 [2024-11-20 11:39:09.460283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068422 ] 00:38:16.996 [2024-11-20 11:39:09.554665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.996 [2024-11-20 11:39:09.609947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.566 11:39:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:17.566 11:39:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:17.567 11:39:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:17.567 11:39:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.567 11:39:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.567 [2024-11-20 11:39:10.274251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:17.567 null0 00:38:17.828 [2024-11-20 11:39:10.306296] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:17.828 [2024-11-20 11:39:10.306950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:17.828 11:39:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.828 [2024-11-20 11:39:10.338351] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:17.828 request: 00:38:17.828 { 00:38:17.828 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.828 "secure_channel": false, 00:38:17.828 "listen_address": { 00:38:17.828 "trtype": "tcp", 00:38:17.828 "traddr": "127.0.0.1", 00:38:17.828 "trsvcid": "4420" 00:38:17.828 }, 00:38:17.828 "method": "nvmf_subsystem_add_listener", 00:38:17.828 "req_id": 1 00:38:17.828 } 00:38:17.828 Got JSON-RPC error response 00:38:17.828 response: 00:38:17.828 { 00:38:17.828 
"code": -32602, 00:38:17.828 "message": "Invalid parameters" 00:38:17.828 } 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:17.828 11:39:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=3068595 00:38:17.828 11:39:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3068595 /var/tmp/bperf.sock 00:38:17.828 11:39:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3068595 ']' 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:17.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:17.828 11:39:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:17.828 [2024-11-20 11:39:10.398682] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:38:17.828 [2024-11-20 11:39:10.398748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068595 ] 00:38:17.828 [2024-11-20 11:39:10.491667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.828 [2024-11-20 11:39:10.545083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.771 11:39:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.771 11:39:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:18.771 11:39:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:18.771 11:39:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:18.771 11:39:11 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LTOaUmNNvH 00:38:18.771 11:39:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LTOaUmNNvH 00:38:19.032 11:39:11 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:19.032 11:39:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:19.032 11:39:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.032 11:39:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.032 11:39:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:19.032 11:39:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vcXQOHsbWr == \/\t\m\p\/\t\m\p\.\v\c\X\Q\O\H\s\b\W\r ]] 00:38:19.293 11:39:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:19.293 11:39:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:19.293 11:39:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.LTOaUmNNvH == \/\t\m\p\/\t\m\p\.\L\T\O\a\U\m\N\N\v\H ]] 00:38:19.293 11:39:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.293 11:39:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:19.554 11:39:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:19.554 11:39:12 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:19.554 11:39:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:19.554 11:39:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:19.554 11:39:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:19.554 11:39:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:19.554 11:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:19.815 11:39:12 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:19.815 11:39:12 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:19.815 11:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:19.815 [2024-11-20 11:39:12.512317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:20.077 nvme0n1 00:38:20.077 11:39:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:20.077 11:39:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:20.077 11:39:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:20.077 11:39:12 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:20.077 11:39:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:20.338 11:39:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:20.338 11:39:12 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:20.598 Running I/O for 1 seconds... 00:38:21.540 16086.00 IOPS, 62.84 MiB/s 00:38:21.540 Latency(us) 00:38:21.540 [2024-11-20T10:39:14.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.540 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:21.540 nvme0n1 : 1.00 16152.18 63.09 0.00 0.00 7910.54 2416.64 14090.24 00:38:21.540 [2024-11-20T10:39:14.282Z] =================================================================================================================== 00:38:21.540 [2024-11-20T10:39:14.282Z] Total : 16152.18 63.09 0.00 0.00 7910.54 2416.64 14090.24 00:38:21.540 { 00:38:21.540 "results": [ 00:38:21.540 { 00:38:21.540 "job": "nvme0n1", 00:38:21.540 "core_mask": "0x2", 00:38:21.540 "workload": "randrw", 00:38:21.540 "percentage": 50, 00:38:21.540 "status": "finished", 00:38:21.540 "queue_depth": 128, 00:38:21.540 "io_size": 4096, 00:38:21.540 "runtime": 1.003889, 00:38:21.540 "iops": 16152.184155818024, 00:38:21.540 "mibps": 63.094469358664156, 00:38:21.540 "io_failed": 0, 00:38:21.540 "io_timeout": 0, 00:38:21.540 "avg_latency_us": 7910.5413538904295, 00:38:21.540 "min_latency_us": 2416.64, 00:38:21.540 "max_latency_us": 14090.24 00:38:21.540 } 00:38:21.540 ], 00:38:21.540 "core_count": 1 00:38:21.540 } 00:38:21.540 11:39:14 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:21.540 11:39:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:21.800 11:39:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.800 11:39:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:21.800 11:39:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.800 11:39:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.061 11:39:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:22.061 11:39:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:22.061 11:39:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.061 11:39:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:22.320 [2024-11-20 11:39:14.819905] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:22.320 [2024-11-20 11:39:14.820444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13afc10 (107): Transport endpoint is not connected 00:38:22.320 [2024-11-20 11:39:14.821440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13afc10 (9): Bad file descriptor 00:38:22.320 [2024-11-20 11:39:14.822442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:22.320 [2024-11-20 11:39:14.822454] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:22.320 [2024-11-20 11:39:14.822460] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:22.320 [2024-11-20 11:39:14.822466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
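
The attach attempt above passes `--psk key1` against a listener that was brought up for key0, so the TLS handshake is torn down ("Transport endpoint is not connected") and the controller lands in a failed state — which is exactly what the test wants, hence the call is wrapped in the harness's NOT helper. NOT (together with valid_exec_arg, both visible in the trace) inverts the wrapped command's exit status. A stripped-down stand-in, omitting the argument validation and exit-code classification the real autotest_common.sh version performs:

    # Succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, as expected
    }
    NOT false && echo "expected failure observed"
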
00:38:22.320 request: 00:38:22.320 { 00:38:22.320 "name": "nvme0", 00:38:22.320 "trtype": "tcp", 00:38:22.320 "traddr": "127.0.0.1", 00:38:22.320 "adrfam": "ipv4", 00:38:22.320 "trsvcid": "4420", 00:38:22.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:22.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:22.320 "prchk_reftag": false, 00:38:22.320 "prchk_guard": false, 00:38:22.320 "hdgst": false, 00:38:22.320 "ddgst": false, 00:38:22.320 "psk": "key1", 00:38:22.320 "allow_unrecognized_csi": false, 00:38:22.320 "method": "bdev_nvme_attach_controller", 00:38:22.320 "req_id": 1 00:38:22.320 } 00:38:22.320 Got JSON-RPC error response 00:38:22.320 response: 00:38:22.320 { 00:38:22.320 "code": -5, 00:38:22.320 "message": "Input/output error" 00:38:22.320 } 00:38:22.320 11:39:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:22.320 11:39:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:22.320 11:39:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:22.320 11:39:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:22.320 11:39:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:22.320 11:39:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.320 11:39:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.320 11:39:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.320 11:39:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.320 11:39:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.320 11:39:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:22.320 11:39:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:22.320 11:39:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.320 11:39:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.320 11:39:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.320 11:39:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.320 11:39:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.582 11:39:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:22.582 11:39:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:22.582 11:39:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:22.843 11:39:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:22.843 11:39:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:22.843 11:39:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:22.843 11:39:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:22.843 11:39:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.104 11:39:15 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:23.104 11:39:15 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vcXQOHsbWr 00:38:23.104 11:39:15 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.104 11:39:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:23.104 11:39:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:23.365 [2024-11-20 11:39:15.848094] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vcXQOHsbWr': 0100660 00:38:23.365 [2024-11-20 11:39:15.848111] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:23.365 request: 00:38:23.365 { 00:38:23.365 "name": "key0", 00:38:23.365 "path": "/tmp/tmp.vcXQOHsbWr", 00:38:23.365 "method": "keyring_file_add_key", 00:38:23.365 "req_id": 1 00:38:23.365 } 00:38:23.365 Got JSON-RPC error response 00:38:23.365 response: 00:38:23.366 { 00:38:23.366 "code": -1, 00:38:23.366 "message": "Operation not permitted" 00:38:23.366 } 00:38:23.366 11:39:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:23.366 11:39:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:23.366 11:39:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:23.366 11:39:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:23.366 11:39:15 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vcXQOHsbWr 00:38:23.366 11:39:15 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:23.366 11:39:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vcXQOHsbWr 00:38:23.366 11:39:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vcXQOHsbWr 00:38:23.366 11:39:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:23.366 11:39:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:23.366 11:39:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:23.366 11:39:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:23.366 11:39:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:23.366 11:39:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:23.627 11:39:16 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:23.627 11:39:16 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.627 11:39:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.627 11:39:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:23.888 [2024-11-20 11:39:16.389468] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vcXQOHsbWr': No such file or directory 00:38:23.888 [2024-11-20 11:39:16.389481] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:23.888 [2024-11-20 11:39:16.389494] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:23.888 [2024-11-20 11:39:16.389499] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:23.888 [2024-11-20 11:39:16.389505] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:23.888 [2024-11-20 11:39:16.389514] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:23.888 request: 00:38:23.888 { 00:38:23.888 "name": "nvme0", 00:38:23.888 "trtype": "tcp", 00:38:23.888 "traddr": "127.0.0.1", 00:38:23.888 "adrfam": "ipv4", 00:38:23.888 "trsvcid": "4420", 00:38:23.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.888 "prchk_reftag": false, 00:38:23.888 "prchk_guard": false, 00:38:23.888 "hdgst": false, 00:38:23.888 "ddgst": false, 00:38:23.888 "psk": "key0", 00:38:23.888 "allow_unrecognized_csi": false, 00:38:23.888 "method": "bdev_nvme_attach_controller", 00:38:23.888 "req_id": 1 00:38:23.888 } 00:38:23.888 Got JSON-RPC error response 00:38:23.888 response: 00:38:23.888 { 00:38:23.888 "code": -19, 00:38:23.888 "message": "No such device" 00:38:23.888 } 00:38:23.888 11:39:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:23.888 11:39:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:23.888 11:39:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:23.888 11:39:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:23.888 11:39:16 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:23.888 11:39:16 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6o8UlfqYap 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:23.888 11:39:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:23.888 11:39:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:23.888 11:39:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:23.888 11:39:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:23.888 11:39:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:23.888 11:39:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:23.888 11:39:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6o8UlfqYap 00:38:24.150 11:39:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6o8UlfqYap 00:38:24.150 11:39:16 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.6o8UlfqYap 00:38:24.150 11:39:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6o8UlfqYap 00:38:24.150 11:39:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6o8UlfqYap 00:38:24.150 11:39:16 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.150 11:39:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:24.412 nvme0n1 00:38:24.412 11:39:17 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:24.412 11:39:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.412 11:39:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.412 11:39:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.412 11:39:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.412 11:39:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.672 11:39:17 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:24.672 11:39:17 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:24.672 11:39:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:24.672 11:39:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:24.672 11:39:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:24.672 11:39:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.672 11:39:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:24.672 11:39:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.931 11:39:17 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:24.931 11:39:17 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:24.931 11:39:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.931 11:39:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.931 11:39:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.931 11:39:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.931 11:39:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.192 11:39:17 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:25.192 11:39:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:25.192 11:39:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:25.192 11:39:17 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:25.192 11:39:17 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:25.192 11:39:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.454 11:39:18 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:25.454 11:39:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6o8UlfqYap 00:38:25.454 11:39:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6o8UlfqYap 00:38:25.715 11:39:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LTOaUmNNvH 00:38:25.715 11:39:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LTOaUmNNvH 00:38:25.715 11:39:18 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.715 11:39:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.976 nvme0n1 00:38:25.976 11:39:18 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:25.976 11:39:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:26.237 11:39:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:26.237 "subsystems": [ 00:38:26.237 { 00:38:26.237 "subsystem": "keyring", 00:38:26.237 "config": [ 00:38:26.237 { 00:38:26.237 "method": "keyring_file_add_key", 00:38:26.237 "params": { 00:38:26.237 "name": "key0", 00:38:26.237 "path": "/tmp/tmp.6o8UlfqYap" 00:38:26.237 } 00:38:26.237 }, 00:38:26.237 { 00:38:26.237 "method": "keyring_file_add_key", 00:38:26.237 "params": { 00:38:26.237 "name": "key1", 00:38:26.237 "path": "/tmp/tmp.LTOaUmNNvH" 00:38:26.237 } 00:38:26.237 } 00:38:26.237 ] 
00:38:26.237 }, 00:38:26.237 { 00:38:26.237 "subsystem": "iobuf", 00:38:26.237 "config": [ 00:38:26.237 { 00:38:26.237 "method": "iobuf_set_options", 00:38:26.237 "params": { 00:38:26.237 "small_pool_count": 8192, 00:38:26.237 "large_pool_count": 1024, 00:38:26.237 "small_bufsize": 8192, 00:38:26.237 "large_bufsize": 135168, 00:38:26.237 "enable_numa": false 00:38:26.237 } 00:38:26.237 } 00:38:26.237 ] 00:38:26.237 }, 00:38:26.237 { 00:38:26.237 "subsystem": "sock", 00:38:26.237 "config": [ 00:38:26.237 { 00:38:26.237 "method": "sock_set_default_impl", 00:38:26.237 "params": { 00:38:26.237 "impl_name": "posix" 00:38:26.237 } 00:38:26.237 }, 00:38:26.237 { 00:38:26.237 "method": "sock_impl_set_options", 00:38:26.237 "params": { 00:38:26.237 "impl_name": "ssl", 00:38:26.237 "recv_buf_size": 4096, 00:38:26.237 "send_buf_size": 4096, 00:38:26.237 "enable_recv_pipe": true, 00:38:26.238 "enable_quickack": false, 00:38:26.238 "enable_placement_id": 0, 00:38:26.238 "enable_zerocopy_send_server": true, 00:38:26.238 "enable_zerocopy_send_client": false, 00:38:26.238 "zerocopy_threshold": 0, 00:38:26.238 "tls_version": 0, 00:38:26.238 "enable_ktls": false 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "sock_impl_set_options", 00:38:26.238 "params": { 00:38:26.238 "impl_name": "posix", 00:38:26.238 "recv_buf_size": 2097152, 00:38:26.238 "send_buf_size": 2097152, 00:38:26.238 "enable_recv_pipe": true, 00:38:26.238 "enable_quickack": false, 00:38:26.238 "enable_placement_id": 0, 00:38:26.238 "enable_zerocopy_send_server": true, 00:38:26.238 "enable_zerocopy_send_client": false, 00:38:26.238 "zerocopy_threshold": 0, 00:38:26.238 "tls_version": 0, 00:38:26.238 "enable_ktls": false 00:38:26.238 } 00:38:26.238 } 00:38:26.238 ] 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "subsystem": "vmd", 00:38:26.238 "config": [] 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "subsystem": "accel", 00:38:26.238 "config": [ 00:38:26.238 { 00:38:26.238 "method": "accel_set_options", 00:38:26.238 "params": { 00:38:26.238 "small_cache_size": 128, 00:38:26.238 "large_cache_size": 16, 00:38:26.238 "task_count": 2048, 00:38:26.238 "sequence_count": 2048, 00:38:26.238 "buf_count": 2048 00:38:26.238 } 00:38:26.238 } 00:38:26.238 ] 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "subsystem": "bdev", 00:38:26.238 "config": [ 00:38:26.238 { 00:38:26.238 "method": "bdev_set_options", 00:38:26.238 "params": { 00:38:26.238 "bdev_io_pool_size": 65535, 00:38:26.238 "bdev_io_cache_size": 256, 00:38:26.238 "bdev_auto_examine": true, 00:38:26.238 "iobuf_small_cache_size": 128, 00:38:26.238 "iobuf_large_cache_size": 16 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "bdev_raid_set_options", 00:38:26.238 "params": { 00:38:26.238 "process_window_size_kb": 1024, 00:38:26.238 "process_max_bandwidth_mb_sec": 0 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "bdev_iscsi_set_options", 00:38:26.238 "params": { 00:38:26.238 "timeout_sec": 30 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "bdev_nvme_set_options", 00:38:26.238 "params": { 00:38:26.238 "action_on_timeout": "none", 00:38:26.238 "timeout_us": 0, 00:38:26.238 "timeout_admin_us": 0, 00:38:26.238 "keep_alive_timeout_ms": 10000, 00:38:26.238 "arbitration_burst": 0, 00:38:26.238 "low_priority_weight": 0, 00:38:26.238 "medium_priority_weight": 0, 00:38:26.238 "high_priority_weight": 0, 00:38:26.238 "nvme_adminq_poll_period_us": 10000, 00:38:26.238 "nvme_ioq_poll_period_us": 0, 00:38:26.238 "io_queue_requests": 512, 
00:38:26.238 "delay_cmd_submit": true, 00:38:26.238 "transport_retry_count": 4, 00:38:26.238 "bdev_retry_count": 3, 00:38:26.238 "transport_ack_timeout": 0, 00:38:26.238 "ctrlr_loss_timeout_sec": 0, 00:38:26.238 "reconnect_delay_sec": 0, 00:38:26.238 "fast_io_fail_timeout_sec": 0, 00:38:26.238 "disable_auto_failback": false, 00:38:26.238 "generate_uuids": false, 00:38:26.238 "transport_tos": 0, 00:38:26.238 "nvme_error_stat": false, 00:38:26.238 "rdma_srq_size": 0, 00:38:26.238 "io_path_stat": false, 00:38:26.238 "allow_accel_sequence": false, 00:38:26.238 "rdma_max_cq_size": 0, 00:38:26.238 "rdma_cm_event_timeout_ms": 0, 00:38:26.238 "dhchap_digests": [ 00:38:26.238 "sha256", 00:38:26.238 "sha384", 00:38:26.238 "sha512" 00:38:26.238 ], 00:38:26.238 "dhchap_dhgroups": [ 00:38:26.238 "null", 00:38:26.238 "ffdhe2048", 00:38:26.238 "ffdhe3072", 00:38:26.238 "ffdhe4096", 00:38:26.238 "ffdhe6144", 00:38:26.238 "ffdhe8192" 00:38:26.238 ] 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "bdev_nvme_attach_controller", 00:38:26.238 "params": { 00:38:26.238 "name": "nvme0", 00:38:26.238 "trtype": "TCP", 00:38:26.238 "adrfam": "IPv4", 00:38:26.238 "traddr": "127.0.0.1", 00:38:26.238 "trsvcid": "4420", 00:38:26.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.238 "prchk_reftag": false, 00:38:26.238 "prchk_guard": false, 00:38:26.238 "ctrlr_loss_timeout_sec": 0, 00:38:26.238 "reconnect_delay_sec": 0, 00:38:26.238 "fast_io_fail_timeout_sec": 0, 00:38:26.238 "psk": "key0", 00:38:26.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.238 "hdgst": false, 00:38:26.238 "ddgst": false, 00:38:26.238 "multipath": "multipath" 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "bdev_nvme_set_hotplug", 00:38:26.238 "params": { 00:38:26.238 "period_us": 100000, 00:38:26.238 "enable": false 00:38:26.238 } 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "method": "bdev_wait_for_examine" 00:38:26.238 } 00:38:26.238 ] 00:38:26.238 }, 00:38:26.238 { 00:38:26.238 "subsystem": "nbd", 00:38:26.238 "config": [] 00:38:26.238 } 00:38:26.238 ] 00:38:26.238 }' 00:38:26.238 11:39:18 keyring_file -- keyring/file.sh@115 -- # killprocess 3068595 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3068595 ']' 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3068595 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3068595 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3068595' 00:38:26.238 killing process with pid 3068595 00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@973 -- # kill 3068595 00:38:26.238 Received shutdown signal, test time was about 1.000000 seconds 00:38:26.238 00:38:26.238 Latency(us) 00:38:26.238 [2024-11-20T10:39:18.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.238 [2024-11-20T10:39:18.980Z] =================================================================================================================== 00:38:26.238 [2024-11-20T10:39:18.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:38:26.238 11:39:18 keyring_file -- common/autotest_common.sh@978 -- # wait 3068595 00:38:26.499 11:39:19 keyring_file -- keyring/file.sh@118 -- # bperfpid=3070549 00:38:26.499 11:39:19 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3070549 /var/tmp/bperf.sock 00:38:26.499 11:39:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3070549 ']' 00:38:26.499 11:39:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:26.499 11:39:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:26.499 11:39:19 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:26.499 11:39:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:26.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:26.499 11:39:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:26.499 11:39:19 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:26.499 "subsystems": [ 00:38:26.499 { 00:38:26.499 "subsystem": "keyring", 00:38:26.499 "config": [ 00:38:26.499 { 00:38:26.499 "method": "keyring_file_add_key", 00:38:26.499 "params": { 00:38:26.499 "name": "key0", 00:38:26.499 "path": "/tmp/tmp.6o8UlfqYap" 00:38:26.499 } 00:38:26.499 }, 00:38:26.499 { 00:38:26.499 "method": "keyring_file_add_key", 00:38:26.499 "params": { 00:38:26.499 "name": "key1", 00:38:26.499 "path": "/tmp/tmp.LTOaUmNNvH" 00:38:26.499 } 00:38:26.499 } 00:38:26.499 ] 00:38:26.499 }, 00:38:26.499 { 00:38:26.499 "subsystem": "iobuf", 00:38:26.499 "config": [ 00:38:26.499 { 00:38:26.499 "method": "iobuf_set_options", 00:38:26.499 "params": { 00:38:26.499 "small_pool_count": 8192, 00:38:26.499 "large_pool_count": 1024, 00:38:26.499 "small_bufsize": 8192, 00:38:26.499 "large_bufsize": 135168, 00:38:26.499 "enable_numa": false 00:38:26.499 } 00:38:26.499 } 00:38:26.499 ] 00:38:26.499 }, 00:38:26.499 { 00:38:26.499 "subsystem": "sock", 00:38:26.499 "config": [ 00:38:26.499 { 00:38:26.499 "method": "sock_set_default_impl", 00:38:26.499 "params": { 00:38:26.499 "impl_name": "posix" 00:38:26.499 } 00:38:26.499 }, 00:38:26.499 { 00:38:26.499 "method": "sock_impl_set_options", 00:38:26.499 "params": { 00:38:26.499 "impl_name": "ssl", 00:38:26.500 "recv_buf_size": 4096, 00:38:26.500 "send_buf_size": 4096, 00:38:26.500 "enable_recv_pipe": true, 00:38:26.500 "enable_quickack": false, 00:38:26.500 "enable_placement_id": 0, 00:38:26.500 "enable_zerocopy_send_server": true, 00:38:26.500 "enable_zerocopy_send_client": false, 00:38:26.500 "zerocopy_threshold": 0, 00:38:26.500 "tls_version": 0, 00:38:26.500 "enable_ktls": false 00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "sock_impl_set_options", 00:38:26.500 "params": { 00:38:26.500 "impl_name": "posix", 00:38:26.500 "recv_buf_size": 2097152, 00:38:26.500 "send_buf_size": 2097152, 00:38:26.500 "enable_recv_pipe": true, 00:38:26.500 "enable_quickack": false, 00:38:26.500 "enable_placement_id": 0, 00:38:26.500 "enable_zerocopy_send_server": true, 00:38:26.500 "enable_zerocopy_send_client": false, 00:38:26.500 "zerocopy_threshold": 0, 00:38:26.500 "tls_version": 0, 00:38:26.500 "enable_ktls": false 00:38:26.500 } 00:38:26.500 } 00:38:26.500 ] 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "subsystem": "vmd", 00:38:26.500 
"config": [] 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "subsystem": "accel", 00:38:26.500 "config": [ 00:38:26.500 { 00:38:26.500 "method": "accel_set_options", 00:38:26.500 "params": { 00:38:26.500 "small_cache_size": 128, 00:38:26.500 "large_cache_size": 16, 00:38:26.500 "task_count": 2048, 00:38:26.500 "sequence_count": 2048, 00:38:26.500 "buf_count": 2048 00:38:26.500 } 00:38:26.500 } 00:38:26.500 ] 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "subsystem": "bdev", 00:38:26.500 "config": [ 00:38:26.500 { 00:38:26.500 "method": "bdev_set_options", 00:38:26.500 "params": { 00:38:26.500 "bdev_io_pool_size": 65535, 00:38:26.500 "bdev_io_cache_size": 256, 00:38:26.500 "bdev_auto_examine": true, 00:38:26.500 "iobuf_small_cache_size": 128, 00:38:26.500 "iobuf_large_cache_size": 16 00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "bdev_raid_set_options", 00:38:26.500 "params": { 00:38:26.500 "process_window_size_kb": 1024, 00:38:26.500 "process_max_bandwidth_mb_sec": 0 00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "bdev_iscsi_set_options", 00:38:26.500 "params": { 00:38:26.500 "timeout_sec": 30 00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "bdev_nvme_set_options", 00:38:26.500 "params": { 00:38:26.500 "action_on_timeout": "none", 00:38:26.500 "timeout_us": 0, 00:38:26.500 "timeout_admin_us": 0, 00:38:26.500 "keep_alive_timeout_ms": 10000, 00:38:26.500 "arbitration_burst": 0, 00:38:26.500 "low_priority_weight": 0, 00:38:26.500 "medium_priority_weight": 0, 00:38:26.500 "high_priority_weight": 0, 00:38:26.500 "nvme_adminq_poll_period_us": 10000, 00:38:26.500 "nvme_ioq_poll_period_us": 0, 00:38:26.500 "io_queue_requests": 512, 00:38:26.500 "delay_cmd_submit": true, 00:38:26.500 "transport_retry_count": 4, 00:38:26.500 "bdev_retry_count": 3, 00:38:26.500 "transport_ack_timeout": 0, 00:38:26.500 "ctrlr_loss_timeout_sec": 0, 00:38:26.500 "reconnect_delay_sec": 0, 00:38:26.500 "fast_io_fail_timeout_sec": 0, 00:38:26.500 "disable_auto_failback": false, 00:38:26.500 "generate_uuids": false, 00:38:26.500 "transport_tos": 0, 00:38:26.500 "nvme_error_stat": false, 00:38:26.500 "rdma_srq_size": 0, 00:38:26.500 "io_path_stat": false, 00:38:26.500 "allow_accel_sequence": false, 00:38:26.500 "rdma_max_cq_size": 0, 00:38:26.500 "rdma_cm_event_timeout_ms": 0, 00:38:26.500 "dhchap_digests": [ 00:38:26.500 "sha256", 00:38:26.500 "sha384", 00:38:26.500 "sha512" 00:38:26.500 ], 00:38:26.500 "dhchap_dhgroups": [ 00:38:26.500 "null", 00:38:26.500 "ffdhe2048", 00:38:26.500 "ffdhe3072", 00:38:26.500 "ffdhe4096", 00:38:26.500 "ffdhe6144", 00:38:26.500 "ffdhe8192" 00:38:26.500 ] 00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "bdev_nvme_attach_controller", 00:38:26.500 "params": { 00:38:26.500 "name": "nvme0", 00:38:26.500 "trtype": "TCP", 00:38:26.500 "adrfam": "IPv4", 00:38:26.500 "traddr": "127.0.0.1", 00:38:26.500 "trsvcid": "4420", 00:38:26.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.500 "prchk_reftag": false, 00:38:26.500 "prchk_guard": false, 00:38:26.500 "ctrlr_loss_timeout_sec": 0, 00:38:26.500 "reconnect_delay_sec": 0, 00:38:26.500 "fast_io_fail_timeout_sec": 0, 00:38:26.500 "psk": "key0", 00:38:26.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:26.500 "hdgst": false, 00:38:26.500 "ddgst": false, 00:38:26.500 "multipath": "multipath" 00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "bdev_nvme_set_hotplug", 00:38:26.500 "params": { 00:38:26.500 "period_us": 100000, 00:38:26.500 "enable": false 
00:38:26.500 } 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "method": "bdev_wait_for_examine" 00:38:26.500 } 00:38:26.500 ] 00:38:26.500 }, 00:38:26.500 { 00:38:26.500 "subsystem": "nbd", 00:38:26.500 "config": [] 00:38:26.500 } 00:38:26.500 ] 00:38:26.500 }' 00:38:26.500 11:39:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:26.500 [2024-11-20 11:39:19.114502] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 00:38:26.500 [2024-11-20 11:39:19.114555] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070549 ] 00:38:26.500 [2024-11-20 11:39:19.197086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.500 [2024-11-20 11:39:19.225280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.761 [2024-11-20 11:39:19.367950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:27.332 11:39:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.332 11:39:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:27.332 11:39:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:27.332 11:39:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.332 11:39:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:27.591 11:39:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:27.591 11:39:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.591 11:39:20 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:27.591 11:39:20 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:27.591 11:39:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.851 11:39:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:27.851 11:39:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:27.851 11:39:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:27.851 11:39:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:28.111 11:39:20 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:28.111 11:39:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:28.111 11:39:20 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.6o8UlfqYap /tmp/tmp.LTOaUmNNvH 00:38:28.111 11:39:20 keyring_file -- keyring/file.sh@20 -- # killprocess 3070549 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3070549 ']' 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3070549 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3070549 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3070549' 00:38:28.111 killing process with pid 3070549 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@973 -- # kill 3070549 00:38:28.111 Received shutdown signal, test time was about 1.000000 seconds 00:38:28.111 00:38:28.111 Latency(us) 00:38:28.111 [2024-11-20T10:39:20.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:28.111 [2024-11-20T10:39:20.853Z] =================================================================================================================== 00:38:28.111 [2024-11-20T10:39:20.853Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@978 -- # wait 3070549 00:38:28.111 11:39:20 keyring_file -- keyring/file.sh@21 -- # killprocess 3068422 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3068422 ']' 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3068422 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:28.111 11:39:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3068422 00:38:28.371 11:39:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:28.371 11:39:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:28.371 11:39:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3068422' 00:38:28.371 killing process with pid 3068422 00:38:28.371 11:39:20 keyring_file -- common/autotest_common.sh@973 -- # kill 3068422 00:38:28.371 11:39:20 keyring_file -- common/autotest_common.sh@978 -- # wait 3068422 00:38:28.371 00:38:28.371 real 0m12.031s 00:38:28.371 user 0m28.902s 00:38:28.371 sys 0m2.807s 00:38:28.371 11:39:21 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:28.371 11:39:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:28.371 ************************************ 00:38:28.371 END TEST keyring_file 00:38:28.371 ************************************ 00:38:28.371 11:39:21 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:28.371 11:39:21 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:28.371 11:39:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:28.371 11:39:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 
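
Two details worth noting at this boundary. First, the 18446744073709551616.00 in the shutdown statistics above is 2^64 — almost certainly the min-latency counter's UINT64_MAX sentinel printed through a double after a run in which no I/O completed before shutdown. Second, keyring_file is done and autotest.sh launches the keyring_linux suite through scripts/keyctl-session-wrapper; the "Joined session keyring: 998659308" line just below is the banner keyctl prints when it starts a program inside a fresh session keyring, so kernel keys the test creates are discarded with the session. Assuming the wrapper boils down to keyutils' session command, the by-hand equivalent is roughly:

    # Run a command inside a throwaway anonymous session keyring; keyctl
    # prints "Joined session keyring: <serial>" before exec'ing the command.
    keyctl session - sh -c 'keyctl show @s'
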
00:38:28.371 11:39:21 -- common/autotest_common.sh@10 -- # set +x 00:38:28.631 ************************************ 00:38:28.631 START TEST keyring_linux 00:38:28.631 ************************************ 00:38:28.631 11:39:21 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:28.631 Joined session keyring: 998659308 00:38:28.631 * Looking for test storage... 00:38:28.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:28.631 11:39:21 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:28.631 11:39:21 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:28.631 11:39:21 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:28.631 11:39:21 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.631 11:39:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:28.631 11:39:21 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.632 11:39:21 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.632 --rc genhtml_branch_coverage=1 00:38:28.632 --rc genhtml_function_coverage=1 00:38:28.632 --rc genhtml_legend=1 00:38:28.632 --rc geninfo_all_blocks=1 00:38:28.632 --rc geninfo_unexecuted_blocks=1 00:38:28.632 00:38:28.632 ' 00:38:28.632 11:39:21 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.632 --rc genhtml_branch_coverage=1 00:38:28.632 --rc genhtml_function_coverage=1 00:38:28.632 --rc genhtml_legend=1 00:38:28.632 --rc geninfo_all_blocks=1 00:38:28.632 --rc geninfo_unexecuted_blocks=1 00:38:28.632 00:38:28.632 ' 00:38:28.632 11:39:21 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.632 --rc genhtml_branch_coverage=1 00:38:28.632 --rc genhtml_function_coverage=1 00:38:28.632 --rc genhtml_legend=1 00:38:28.632 --rc geninfo_all_blocks=1 00:38:28.632 --rc geninfo_unexecuted_blocks=1 00:38:28.632 00:38:28.632 ' 00:38:28.632 11:39:21 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:28.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.632 --rc genhtml_branch_coverage=1 00:38:28.632 --rc genhtml_function_coverage=1 00:38:28.632 --rc genhtml_legend=1 00:38:28.632 --rc geninfo_all_blocks=1 00:38:28.632 --rc geninfo_unexecuted_blocks=1 00:38:28.632 00:38:28.632 ' 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:28.632 11:39:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.632 11:39:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.632 11:39:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.632 11:39:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.632 11:39:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.632 11:39:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.632 11:39:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.632 11:39:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.632 11:39:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:28.632 11:39:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
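The "Joined session keyring: 998659308" line near the top of this test comes from keyctl-session-wrapper: the whole of linux.sh runs inside a throwaway session keyring, so the test keys never land in the caller's keyring. A minimal sketch of that idea with the stock keyutils CLI (the wrapper's actual contents are not shown in this log):

# Run the test inside a fresh anonymous session keyring; keyctl itself
# prints "Joined session keyring: <serial>" exactly as seen above.
keyctl session - /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
# Keys added to @s by the child are discarded when the session exits.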
00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:28.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.632 11:39:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.632 11:39:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:28.632 11:39:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:28.632 11:39:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:28.893 /tmp/:spdk-test:key0 00:38:28.893 11:39:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:28.893 
11:39:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:28.893 11:39:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:28.893 11:39:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:28.893 /tmp/:spdk-test:key1 00:38:28.893 11:39:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3070998 00:38:28.893 11:39:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3070998 00:38:28.893 11:39:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:28.893 11:39:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3070998 ']' 00:38:28.893 11:39:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.893 11:39:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:28.893 11:39:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.893 11:39:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:28.893 11:39:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:28.893 [2024-11-20 11:39:21.518710] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
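prep_key above writes each key to /tmp/:spdk-test:keyN in the TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64 blob>:, produced by the short `python -` step in the trace. A sketch of what that step appears to compute, judged from the key0 value printed later in this log: the ASCII key text with a 4-byte CRC32 appended, then base64-encoded. The little-endian CRC packing is an inference from that printed value, not a quote of SPDK's code:

key=00112233445566778899aabbccddeeff   # digest 0, i.e. no HMAC applied
python3 - "$key" <<'EOF'
import base64, binascii, struct, sys
key = sys.argv[1].encode()             # the hex text itself, not decoded bytes
crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff)
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF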
00:38:28.893 [2024-11-20 11:39:21.518775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070998 ] 00:38:28.893 [2024-11-20 11:39:21.604922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.152 [2024-11-20 11:39:21.645965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.722 11:39:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.722 11:39:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:29.722 11:39:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:29.722 11:39:22 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:29.722 11:39:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:29.722 [2024-11-20 11:39:22.332461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:29.722 null0 00:38:29.722 [2024-11-20 11:39:22.364518] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:29.722 [2024-11-20 11:39:22.364904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:29.722 11:39:22 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:29.722 11:39:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:29.722 492216351 00:38:29.722 11:39:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:29.722 104181663 00:38:29.723 11:39:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3071324 00:38:29.723 11:39:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:29.723 11:39:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3071324 /var/tmp/bperf.sock 00:38:29.723 11:39:22 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3071324 ']' 00:38:29.723 11:39:22 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:29.723 11:39:22 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.723 11:39:22 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:29.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:29.723 11:39:22 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.723 11:39:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:29.723 [2024-11-20 11:39:22.443626] Starting SPDK v25.01-pre git sha1 4d3e9954d / DPDK 24.03.0 initialization... 
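The two keyctl add calls above load the prepared PSKs into the session keyring as user-type keys; the numbers echoed back (492216351 and 104181663) are the kernel-assigned key serials, and the rest of the test drives the keys through them:

sn=$(keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # resolves the name back to the serial
keyctl print "$sn"                      # dumps the PSK payload
keyctl unlink "$sn"                     # detaches it; prints "1 links removed"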
00:38:29.723 [2024-11-20 11:39:22.443676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071324 ] 00:38:29.983 [2024-11-20 11:39:22.526869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.983 [2024-11-20 11:39:22.556640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:30.554 11:39:23 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.554 11:39:23 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:30.554 11:39:23 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:30.554 11:39:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:30.815 11:39:23 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:30.815 11:39:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:31.076 11:39:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:31.076 11:39:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:31.076 [2024-11-20 11:39:23.752700] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:31.336 nvme0n1 00:38:31.336 11:39:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:31.336 11:39:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:31.336 11:39:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:31.336 11:39:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:31.336 11:39:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.336 11:39:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:31.336 11:39:24 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:31.336 11:39:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:31.336 11:39:24 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:31.336 11:39:24 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:31.336 11:39:24 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.336 11:39:24 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:31.336 11:39:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.597 11:39:24 keyring_linux -- keyring/linux.sh@25 -- # sn=492216351 00:38:31.597 11:39:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:31.597 11:39:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:31.597 11:39:24 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 492216351 == \4\9\2\2\1\6\3\5\1 ]] 00:38:31.597 11:39:24 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 492216351 00:38:31.597 11:39:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:31.597 11:39:24 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:31.597 Running I/O for 1 seconds... 00:38:32.978 24410.00 IOPS, 95.35 MiB/s 00:38:32.978 Latency(us) 00:38:32.978 [2024-11-20T10:39:25.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.978 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:32.978 nvme0n1 : 1.01 24411.18 95.36 0.00 0.00 5228.50 4014.08 13817.17 00:38:32.978 [2024-11-20T10:39:25.720Z] =================================================================================================================== 00:38:32.978 [2024-11-20T10:39:25.720Z] Total : 24411.18 95.36 0.00 0.00 5228.50 4014.08 13817.17 00:38:32.978 { 00:38:32.978 "results": [ 00:38:32.978 { 00:38:32.978 "job": "nvme0n1", 00:38:32.978 "core_mask": "0x2", 00:38:32.978 "workload": "randread", 00:38:32.978 "status": "finished", 00:38:32.978 "queue_depth": 128, 00:38:32.978 "io_size": 4096, 00:38:32.978 "runtime": 1.005236, 00:38:32.978 "iops": 24411.183045573376, 00:38:32.978 "mibps": 95.356183771771, 00:38:32.978 "io_failed": 0, 00:38:32.978 "io_timeout": 0, 00:38:32.978 "avg_latency_us": 5228.495565969817, 00:38:32.978 "min_latency_us": 4014.08, 00:38:32.978 "max_latency_us": 13817.173333333334 00:38:32.978 } 00:38:32.978 ], 00:38:32.978 "core_count": 1 00:38:32.978 } 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:32.978 11:39:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:32.978 11:39:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:32.978 11:39:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:32.978 11:39:25 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:32.978 11:39:25 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
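Before the deliberate failure case whose trace continues directly below (attaching with :spdk-test:key1 is expected to fail), the successful path above reduces to four JSON-RPC calls against the bdevperf socket plus a jq cross-check. Condensed from the trace, with rpc.py shortened into a variable for readability:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_linux_set_options --enable   # let the app resolve keys from the session keyring
$rpc framework_start_init                 # finish the --wait-for-rpc startup
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0                 # the PSK is referenced by key name
$rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'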
00:38:32.978 11:39:25 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:32.979 11:39:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:32.979 11:39:25 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:32.979 11:39:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:32.979 11:39:25 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:32.979 11:39:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:33.239 [2024-11-20 11:39:25.814680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:33.239 [2024-11-20 11:39:25.815445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61480 (107): Transport endpoint is not connected 00:38:33.239 [2024-11-20 11:39:25.816441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61480 (9): Bad file descriptor 00:38:33.239 [2024-11-20 11:39:25.817442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:33.239 [2024-11-20 11:39:25.817449] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:33.239 [2024-11-20 11:39:25.817455] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:33.239 [2024-11-20 11:39:25.817462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:33.239 request: 00:38:33.239 { 00:38:33.239 "name": "nvme0", 00:38:33.239 "trtype": "tcp", 00:38:33.239 "traddr": "127.0.0.1", 00:38:33.239 "adrfam": "ipv4", 00:38:33.239 "trsvcid": "4420", 00:38:33.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:33.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:33.239 "prchk_reftag": false, 00:38:33.239 "prchk_guard": false, 00:38:33.239 "hdgst": false, 00:38:33.239 "ddgst": false, 00:38:33.239 "psk": ":spdk-test:key1", 00:38:33.239 "allow_unrecognized_csi": false, 00:38:33.239 "method": "bdev_nvme_attach_controller", 00:38:33.239 "req_id": 1 00:38:33.239 } 00:38:33.239 Got JSON-RPC error response 00:38:33.239 response: 00:38:33.239 { 00:38:33.239 "code": -5, 00:38:33.239 "message": "Input/output error" 00:38:33.239 } 00:38:33.239 11:39:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:33.239 11:39:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:33.239 11:39:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:33.239 11:39:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@33 -- # sn=492216351 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 492216351 00:38:33.239 1 links removed 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@33 -- # sn=104181663 00:38:33.239 11:39:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 104181663 00:38:33.240 1 links removed 00:38:33.240 11:39:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3071324 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3071324 ']' 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3071324 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071324 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071324' 00:38:33.240 killing process with pid 3071324 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 3071324 00:38:33.240 Received shutdown signal, test time was about 1.000000 seconds 00:38:33.240 00:38:33.240 
Latency(us) 00:38:33.240 [2024-11-20T10:39:25.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.240 [2024-11-20T10:39:25.982Z] =================================================================================================================== 00:38:33.240 [2024-11-20T10:39:25.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:33.240 11:39:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 3071324 00:38:33.501 11:39:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3070998 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3070998 ']' 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3070998 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3070998 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3070998' 00:38:33.501 killing process with pid 3070998 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@973 -- # kill 3070998 00:38:33.501 11:39:26 keyring_linux -- common/autotest_common.sh@978 -- # wait 3070998 00:38:33.762 00:38:33.762 real 0m5.135s 00:38:33.762 user 0m9.488s 00:38:33.762 sys 0m1.449s 00:38:33.762 11:39:26 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.762 11:39:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:33.762 ************************************ 00:38:33.762 END TEST keyring_linux 00:38:33.762 ************************************ 00:38:33.762 11:39:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:33.762 11:39:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:33.762 11:39:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:33.762 11:39:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:33.762 11:39:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:33.762 11:39:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:33.762 11:39:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:33.762 11:39:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.762 11:39:26 -- common/autotest_common.sh@10 -- # set +x 00:38:33.762 11:39:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:33.762 11:39:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:33.762 11:39:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:33.762 11:39:26 -- common/autotest_common.sh@10 -- # set +x 00:38:41.904 INFO: APP EXITING 
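The cleanup trap traced above tears the keys down by name: look each serial up with keyctl search, then unlink it, which accounts for the two "1 links removed" lines. A condensed sketch (the rm of the /tmp copies is an assumption; that step is not visible in the trace):

for name in key0 key1; do
    sn=$(keyctl search @s user ":spdk-test:$name") || continue
    keyctl unlink "$sn"                   # prints "1 links removed"
    rm -f "/tmp/:spdk-test:$name"         # assumed cleanup of prep_key's files
done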
00:38:41.904 INFO: killing all VMs 00:38:41.904 INFO: killing vhost app 00:38:41.904 WARN: no vhost pid file found 00:38:41.904 INFO: EXIT DONE 00:38:45.202 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:45.202 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:45.202 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:49.405 Cleaning 00:38:49.405 Removing: /var/run/dpdk/spdk0/config 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:49.405 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:49.405 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:49.405 Removing: /var/run/dpdk/spdk1/config 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:49.405 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:49.405 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:49.405 Removing: /var/run/dpdk/spdk2/config 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:49.405 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:49.405 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:49.406 Removing: 
/var/run/dpdk/spdk3/config 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:49.406 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:49.406 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:49.406 Removing: /var/run/dpdk/spdk4/config 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:49.406 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:49.406 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:49.406 Removing: /dev/shm/bdev_svc_trace.1 00:38:49.406 Removing: /dev/shm/nvmf_trace.0 00:38:49.406 Removing: /dev/shm/spdk_tgt_trace.pid2491763 00:38:49.406 Removing: /var/run/dpdk/spdk0 00:38:49.406 Removing: /var/run/dpdk/spdk1 00:38:49.406 Removing: /var/run/dpdk/spdk2 00:38:49.406 Removing: /var/run/dpdk/spdk3 00:38:49.406 Removing: /var/run/dpdk/spdk4 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2490266 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2491763 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2492604 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2493650 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2493989 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2495050 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2495324 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2495530 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2496667 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2497446 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2497843 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2498244 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2498637 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2498939 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2499102 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2499448 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2499833 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2500917 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2504498 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2504858 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2505227 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2505243 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2505788 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2505971 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2506437 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2506763 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2507049 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2507150 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2507504 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2507521 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2508405 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2508761 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2509181 00:38:49.406 Removing: 
/var/run/dpdk/spdk_pid2513901 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2519125 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2531173 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2531860 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2537254 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2537608 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2542816 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2549965 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2553184 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2566288 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2577320 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2579348 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2580365 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2601416 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2606402 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2662969 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2669414 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2677095 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2685015 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2685051 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2686077 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2687120 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2688177 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2688776 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2688903 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2689122 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2689353 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2689362 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2690367 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2691373 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2692378 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2693046 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2693052 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2693387 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2694829 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2696136 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2705914 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2740427 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2745838 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2747839 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2750186 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2750382 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2750562 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2750890 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2751616 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2754062 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2755156 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2756307 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2759031 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2759738 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2760451 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2765517 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2772134 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2772136 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2772138 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2776793 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2786996 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2791807 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2799050 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2800538 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2802225 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2804132 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2810250 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2815474 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2820517 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2829656 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2829786 00:38:49.406 Removing: 
/var/run/dpdk/spdk_pid2834975 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2835131 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2835337 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2835901 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2836006 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2841393 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2842212 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2847540 00:38:49.406 Removing: /var/run/dpdk/spdk_pid2850741 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2857454 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2864130 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2874816 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2883484 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2883486 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2906338 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2907073 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2907909 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2908705 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2909766 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2910453 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2911145 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2912007 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2917763 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2918013 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2925139 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2925519 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2931987 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2937021 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2948679 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2949364 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2954406 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2954760 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2959799 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2966861 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2970404 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2982659 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2993308 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2995245 00:38:49.667 Removing: /var/run/dpdk/spdk_pid2996319 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3015964 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3020996 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3024437 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3032201 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3032206 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3038087 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3040457 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3042788 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3044083 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3046506 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3048014 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3057983 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3058645 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3059275 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3062124 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3062616 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3063284 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3068422 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3068595 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3070549 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3070998 00:38:49.667 Removing: /var/run/dpdk/spdk_pid3071324 00:38:49.667 Clean 00:38:49.928 11:39:42 -- common/autotest_common.sh@1453 -- # return 0 00:38:49.928 11:39:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:49.928 11:39:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.928 11:39:42 -- common/autotest_common.sh@10 -- # set +x 00:38:49.928 11:39:42 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:38:49.928 11:39:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:49.928 11:39:42 -- common/autotest_common.sh@10 -- # set +x 00:38:49.928 11:39:42 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:49.928 11:39:42 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:49.928 11:39:42 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:49.928 11:39:42 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:49.928 11:39:42 -- spdk/autotest.sh@398 -- # hostname 00:38:49.928 11:39:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:50.189 geninfo: WARNING: invalid characters removed from testname! 00:39:16.961 11:40:08 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:18.877 11:40:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:20.787 11:40:13 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:22.695 11:40:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:24.075 11:40:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:25.986 11:40:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:27.370 11:40:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:27.370 11:40:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:27.370 11:40:19 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:27.370 11:40:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:27.370 11:40:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:27.370 11:40:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:27.370 + [[ -n 2404818 ]] 00:39:27.370 + sudo kill 2404818 00:39:27.382 [Pipeline] } 00:39:27.399 [Pipeline] // stage 00:39:27.404 [Pipeline] } 00:39:27.418 [Pipeline] // timeout 00:39:27.425 [Pipeline] } 00:39:27.439 [Pipeline] // catchError 00:39:27.444 [Pipeline] } 00:39:27.461 [Pipeline] // wrap 00:39:27.467 [Pipeline] } 00:39:27.481 [Pipeline] // catchError 00:39:27.490 [Pipeline] stage 00:39:27.492 [Pipeline] { (Epilogue) 00:39:27.505 [Pipeline] catchError 00:39:27.507 [Pipeline] { 00:39:27.520 [Pipeline] echo 00:39:27.522 Cleanup processes 00:39:27.528 [Pipeline] sh 00:39:27.818 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:27.818 3084325 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:27.834 [Pipeline] sh 00:39:28.122 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:28.122 ++ grep -v 'sudo pgrep' 00:39:28.122 ++ awk '{print $1}' 00:39:28.122 + sudo kill -9 00:39:28.122 + true 00:39:28.134 [Pipeline] sh 00:39:28.422 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:40.666 [Pipeline] sh 00:39:40.953 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:40.953 Artifacts sizes are good 00:39:40.967 [Pipeline] archiveArtifacts 00:39:40.973 Archiving artifacts 00:39:41.101 [Pipeline] sh 00:39:41.389 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:41.404 [Pipeline] cleanWs 00:39:41.414 [WS-CLEANUP] Deleting project workspace... 00:39:41.414 [WS-CLEANUP] Deferred wipeout is used... 00:39:41.421 [WS-CLEANUP] done 00:39:41.423 [Pipeline] } 00:39:41.441 [Pipeline] // catchError 00:39:41.451 [Pipeline] sh 00:39:41.768 + logger -p user.info -t JENKINS-CI 00:39:41.778 [Pipeline] } 00:39:41.792 [Pipeline] // stage 00:39:41.797 [Pipeline] } 00:39:41.811 [Pipeline] // node 00:39:41.817 [Pipeline] End of Pipeline 00:39:41.851 Finished: SUCCESS
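As a closing note on the coverage steps traced just before the pipeline epilogue: they reduce to a standard lcov pipeline, i.e. capture the test counters from the build tree, merge them with the base capture, then filter out third-party and example code with repeated -r passes. A condensed sketch with the same filters (the --rc coverage options and --ignore-errors flags from the trace are omitted here for brevity):

src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$src/../output
lcov -q -c --no-external -d "$src" -t spdk-cyp-09 -o "$out/cov_test.info"   # capture
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"        # prune
done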